Test Report: KVM_Linux_crio 20151

33072eff0e89b858b45dc04bb45c552eedaf3583:2025-01-20:37991

Test fail (14/308)

TestAddons/parallel/Ingress (159.08s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-158281 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-158281 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-158281 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [aa058638-8c52-452d-80ca-c0225f49ce0e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [aa058638-8c52-452d-80ca-c0225f49ce0e] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 15.003271654s
I0120 11:25:31.407439  949656 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-158281 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-158281 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.128818445s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-158281 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-158281 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.113
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-158281 -n addons-158281
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-158281 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-158281 logs -n 25: (1.203757961s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-057266                                                                     | download-only-057266 | jenkins | v1.35.0 | 20 Jan 25 11:22 UTC | 20 Jan 25 11:22 UTC |
	| delete  | -p download-only-060504                                                                     | download-only-060504 | jenkins | v1.35.0 | 20 Jan 25 11:22 UTC | 20 Jan 25 11:22 UTC |
	| delete  | -p download-only-057266                                                                     | download-only-057266 | jenkins | v1.35.0 | 20 Jan 25 11:22 UTC | 20 Jan 25 11:22 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-093509 | jenkins | v1.35.0 | 20 Jan 25 11:22 UTC |                     |
	|         | binary-mirror-093509                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:37297                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-093509                                                                     | binary-mirror-093509 | jenkins | v1.35.0 | 20 Jan 25 11:22 UTC | 20 Jan 25 11:22 UTC |
	| addons  | enable dashboard -p                                                                         | addons-158281        | jenkins | v1.35.0 | 20 Jan 25 11:22 UTC |                     |
	|         | addons-158281                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-158281        | jenkins | v1.35.0 | 20 Jan 25 11:22 UTC |                     |
	|         | addons-158281                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-158281 --wait=true                                                                | addons-158281        | jenkins | v1.35.0 | 20 Jan 25 11:22 UTC | 20 Jan 25 11:24 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-158281 addons disable                                                                | addons-158281        | jenkins | v1.35.0 | 20 Jan 25 11:24 UTC | 20 Jan 25 11:24 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-158281 addons disable                                                                | addons-158281        | jenkins | v1.35.0 | 20 Jan 25 11:24 UTC | 20 Jan 25 11:24 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-158281        | jenkins | v1.35.0 | 20 Jan 25 11:24 UTC | 20 Jan 25 11:24 UTC |
	|         | -p addons-158281                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-158281 addons                                                                        | addons-158281        | jenkins | v1.35.0 | 20 Jan 25 11:25 UTC | 20 Jan 25 11:25 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-158281 addons                                                                        | addons-158281        | jenkins | v1.35.0 | 20 Jan 25 11:25 UTC | 20 Jan 25 11:25 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-158281 addons                                                                        | addons-158281        | jenkins | v1.35.0 | 20 Jan 25 11:25 UTC | 20 Jan 25 11:25 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-158281 addons disable                                                                | addons-158281        | jenkins | v1.35.0 | 20 Jan 25 11:25 UTC | 20 Jan 25 11:25 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-158281 ip                                                                            | addons-158281        | jenkins | v1.35.0 | 20 Jan 25 11:25 UTC | 20 Jan 25 11:25 UTC |
	| addons  | addons-158281 addons disable                                                                | addons-158281        | jenkins | v1.35.0 | 20 Jan 25 11:25 UTC | 20 Jan 25 11:25 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-158281 ssh cat                                                                       | addons-158281        | jenkins | v1.35.0 | 20 Jan 25 11:25 UTC | 20 Jan 25 11:25 UTC |
	|         | /opt/local-path-provisioner/pvc-154e1d54-dd50-44d3-a13f-5a4e77381800_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-158281 addons disable                                                                | addons-158281        | jenkins | v1.35.0 | 20 Jan 25 11:25 UTC | 20 Jan 25 11:25 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-158281 addons                                                                        | addons-158281        | jenkins | v1.35.0 | 20 Jan 25 11:25 UTC | 20 Jan 25 11:25 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-158281 addons disable                                                                | addons-158281        | jenkins | v1.35.0 | 20 Jan 25 11:25 UTC | 20 Jan 25 11:25 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-158281 ssh curl -s                                                                   | addons-158281        | jenkins | v1.35.0 | 20 Jan 25 11:25 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-158281 addons                                                                        | addons-158281        | jenkins | v1.35.0 | 20 Jan 25 11:26 UTC | 20 Jan 25 11:26 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-158281 addons                                                                        | addons-158281        | jenkins | v1.35.0 | 20 Jan 25 11:26 UTC | 20 Jan 25 11:26 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-158281 ip                                                                            | addons-158281        | jenkins | v1.35.0 | 20 Jan 25 11:27 UTC | 20 Jan 25 11:27 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 11:22:26
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 11:22:26.257097  950344 out.go:345] Setting OutFile to fd 1 ...
	I0120 11:22:26.257389  950344 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:22:26.257400  950344 out.go:358] Setting ErrFile to fd 2...
	I0120 11:22:26.257405  950344 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:22:26.257654  950344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-942401/.minikube/bin
	I0120 11:22:26.258353  950344 out.go:352] Setting JSON to false
	I0120 11:22:26.259376  950344 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":14689,"bootTime":1737357457,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 11:22:26.259484  950344 start.go:139] virtualization: kvm guest
	I0120 11:22:26.261435  950344 out.go:177] * [addons-158281] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 11:22:26.262768  950344 out.go:177]   - MINIKUBE_LOCATION=20151
	I0120 11:22:26.262767  950344 notify.go:220] Checking for updates...
	I0120 11:22:26.265078  950344 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 11:22:26.266191  950344 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 11:22:26.267354  950344 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-942401/.minikube
	I0120 11:22:26.268430  950344 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 11:22:26.269428  950344 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 11:22:26.270586  950344 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 11:22:26.303218  950344 out.go:177] * Using the kvm2 driver based on user configuration
	I0120 11:22:26.304263  950344 start.go:297] selected driver: kvm2
	I0120 11:22:26.304278  950344 start.go:901] validating driver "kvm2" against <nil>
	I0120 11:22:26.304290  950344 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 11:22:26.304970  950344 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 11:22:26.305091  950344 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20151-942401/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 11:22:26.319199  950344 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 11:22:26.319257  950344 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 11:22:26.319512  950344 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 11:22:26.319545  950344 cni.go:84] Creating CNI manager for ""
	I0120 11:22:26.319601  950344 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 11:22:26.319611  950344 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0120 11:22:26.319680  950344 start.go:340] cluster config:
	{Name:addons-158281 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:addons-158281 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPl
ugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPause
Interval:1m0s}
	I0120 11:22:26.319783  950344 iso.go:125] acquiring lock: {Name:mk7f0ac9e7ba04626414e742c3e5292d79996a4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 11:22:26.321857  950344 out.go:177] * Starting "addons-158281" primary control-plane node in "addons-158281" cluster
	I0120 11:22:26.323057  950344 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 11:22:26.323104  950344 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0120 11:22:26.323126  950344 cache.go:56] Caching tarball of preloaded images
	I0120 11:22:26.323200  950344 preload.go:172] Found /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 11:22:26.323211  950344 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0120 11:22:26.323479  950344 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/config.json ...
	I0120 11:22:26.323501  950344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/config.json: {Name:mk4ad6c05a9fc803bc10daf6aa0f9b7aafb97aad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 11:22:26.323623  950344 start.go:360] acquireMachinesLock for addons-158281: {Name:mkd5527ce9753efd08511b23d71dbb6bbf416f1b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 11:22:26.323667  950344 start.go:364] duration metric: took 32.147µs to acquireMachinesLock for "addons-158281"
	I0120 11:22:26.323685  950344 start.go:93] Provisioning new machine with config: &{Name:addons-158281 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:addons-158281 Namespa
ce:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptim
izations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 11:22:26.323749  950344 start.go:125] createHost starting for "" (driver="kvm2")
	I0120 11:22:26.325269  950344 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0120 11:22:26.325422  950344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:22:26.325451  950344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:22:26.339793  950344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33495
	I0120 11:22:26.340338  950344 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:22:26.340933  950344 main.go:141] libmachine: Using API Version  1
	I0120 11:22:26.340955  950344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:22:26.341321  950344 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:22:26.341527  950344 main.go:141] libmachine: (addons-158281) Calling .GetMachineName
	I0120 11:22:26.341681  950344 main.go:141] libmachine: (addons-158281) Calling .DriverName
	I0120 11:22:26.341816  950344 start.go:159] libmachine.API.Create for "addons-158281" (driver="kvm2")
	I0120 11:22:26.341851  950344 client.go:168] LocalClient.Create starting
	I0120 11:22:26.341891  950344 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem
	I0120 11:22:26.482593  950344 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem
	I0120 11:22:26.829879  950344 main.go:141] libmachine: Running pre-create checks...
	I0120 11:22:26.829904  950344 main.go:141] libmachine: (addons-158281) Calling .PreCreateCheck
	I0120 11:22:26.830324  950344 main.go:141] libmachine: (addons-158281) Calling .GetConfigRaw
	I0120 11:22:26.830732  950344 main.go:141] libmachine: Creating machine...
	I0120 11:22:26.830747  950344 main.go:141] libmachine: (addons-158281) Calling .Create
	I0120 11:22:26.830918  950344 main.go:141] libmachine: (addons-158281) creating KVM machine...
	I0120 11:22:26.830945  950344 main.go:141] libmachine: (addons-158281) creating network...
	I0120 11:22:26.832188  950344 main.go:141] libmachine: (addons-158281) DBG | found existing default KVM network
	I0120 11:22:26.833029  950344 main.go:141] libmachine: (addons-158281) DBG | I0120 11:22:26.832870  950367 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000201350}
	I0120 11:22:26.833071  950344 main.go:141] libmachine: (addons-158281) DBG | created network xml: 
	I0120 11:22:26.833090  950344 main.go:141] libmachine: (addons-158281) DBG | <network>
	I0120 11:22:26.833103  950344 main.go:141] libmachine: (addons-158281) DBG |   <name>mk-addons-158281</name>
	I0120 11:22:26.833114  950344 main.go:141] libmachine: (addons-158281) DBG |   <dns enable='no'/>
	I0120 11:22:26.833120  950344 main.go:141] libmachine: (addons-158281) DBG |   
	I0120 11:22:26.833127  950344 main.go:141] libmachine: (addons-158281) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0120 11:22:26.833135  950344 main.go:141] libmachine: (addons-158281) DBG |     <dhcp>
	I0120 11:22:26.833140  950344 main.go:141] libmachine: (addons-158281) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0120 11:22:26.833148  950344 main.go:141] libmachine: (addons-158281) DBG |     </dhcp>
	I0120 11:22:26.833152  950344 main.go:141] libmachine: (addons-158281) DBG |   </ip>
	I0120 11:22:26.833156  950344 main.go:141] libmachine: (addons-158281) DBG |   
	I0120 11:22:26.833162  950344 main.go:141] libmachine: (addons-158281) DBG | </network>
	I0120 11:22:26.833176  950344 main.go:141] libmachine: (addons-158281) DBG | 
	I0120 11:22:26.838476  950344 main.go:141] libmachine: (addons-158281) DBG | trying to create private KVM network mk-addons-158281 192.168.39.0/24...
	I0120 11:22:26.900844  950344 main.go:141] libmachine: (addons-158281) DBG | private KVM network mk-addons-158281 192.168.39.0/24 created
	I0120 11:22:26.900899  950344 main.go:141] libmachine: (addons-158281) setting up store path in /home/jenkins/minikube-integration/20151-942401/.minikube/machines/addons-158281 ...
	I0120 11:22:26.900913  950344 main.go:141] libmachine: (addons-158281) DBG | I0120 11:22:26.900822  950367 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20151-942401/.minikube
	I0120 11:22:26.900926  950344 main.go:141] libmachine: (addons-158281) building disk image from file:///home/jenkins/minikube-integration/20151-942401/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0120 11:22:26.901112  950344 main.go:141] libmachine: (addons-158281) Downloading /home/jenkins/minikube-integration/20151-942401/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20151-942401/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0120 11:22:27.205198  950344 main.go:141] libmachine: (addons-158281) DBG | I0120 11:22:27.205051  950367 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/addons-158281/id_rsa...
	I0120 11:22:27.445490  950344 main.go:141] libmachine: (addons-158281) DBG | I0120 11:22:27.445345  950367 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/addons-158281/addons-158281.rawdisk...
	I0120 11:22:27.445522  950344 main.go:141] libmachine: (addons-158281) DBG | Writing magic tar header
	I0120 11:22:27.445532  950344 main.go:141] libmachine: (addons-158281) DBG | Writing SSH key tar header
	I0120 11:22:27.445539  950344 main.go:141] libmachine: (addons-158281) DBG | I0120 11:22:27.445487  950367 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20151-942401/.minikube/machines/addons-158281 ...
	I0120 11:22:27.445651  950344 main.go:141] libmachine: (addons-158281) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/addons-158281
	I0120 11:22:27.445687  950344 main.go:141] libmachine: (addons-158281) setting executable bit set on /home/jenkins/minikube-integration/20151-942401/.minikube/machines/addons-158281 (perms=drwx------)
	I0120 11:22:27.445699  950344 main.go:141] libmachine: (addons-158281) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-942401/.minikube/machines
	I0120 11:22:27.445712  950344 main.go:141] libmachine: (addons-158281) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-942401/.minikube
	I0120 11:22:27.445722  950344 main.go:141] libmachine: (addons-158281) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-942401
	I0120 11:22:27.445729  950344 main.go:141] libmachine: (addons-158281) setting executable bit set on /home/jenkins/minikube-integration/20151-942401/.minikube/machines (perms=drwxr-xr-x)
	I0120 11:22:27.445742  950344 main.go:141] libmachine: (addons-158281) setting executable bit set on /home/jenkins/minikube-integration/20151-942401/.minikube (perms=drwxr-xr-x)
	I0120 11:22:27.445756  950344 main.go:141] libmachine: (addons-158281) setting executable bit set on /home/jenkins/minikube-integration/20151-942401 (perms=drwxrwxr-x)
	I0120 11:22:27.445767  950344 main.go:141] libmachine: (addons-158281) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0120 11:22:27.445781  950344 main.go:141] libmachine: (addons-158281) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0120 11:22:27.445794  950344 main.go:141] libmachine: (addons-158281) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0120 11:22:27.445800  950344 main.go:141] libmachine: (addons-158281) DBG | checking permissions on dir: /home/jenkins
	I0120 11:22:27.445808  950344 main.go:141] libmachine: (addons-158281) creating domain...
	I0120 11:22:27.445817  950344 main.go:141] libmachine: (addons-158281) DBG | checking permissions on dir: /home
	I0120 11:22:27.445821  950344 main.go:141] libmachine: (addons-158281) DBG | skipping /home - not owner
	I0120 11:22:27.446809  950344 main.go:141] libmachine: (addons-158281) define libvirt domain using xml: 
	I0120 11:22:27.446829  950344 main.go:141] libmachine: (addons-158281) <domain type='kvm'>
	I0120 11:22:27.446836  950344 main.go:141] libmachine: (addons-158281)   <name>addons-158281</name>
	I0120 11:22:27.446845  950344 main.go:141] libmachine: (addons-158281)   <memory unit='MiB'>4000</memory>
	I0120 11:22:27.446851  950344 main.go:141] libmachine: (addons-158281)   <vcpu>2</vcpu>
	I0120 11:22:27.446855  950344 main.go:141] libmachine: (addons-158281)   <features>
	I0120 11:22:27.446860  950344 main.go:141] libmachine: (addons-158281)     <acpi/>
	I0120 11:22:27.446864  950344 main.go:141] libmachine: (addons-158281)     <apic/>
	I0120 11:22:27.446869  950344 main.go:141] libmachine: (addons-158281)     <pae/>
	I0120 11:22:27.446875  950344 main.go:141] libmachine: (addons-158281)     
	I0120 11:22:27.446880  950344 main.go:141] libmachine: (addons-158281)   </features>
	I0120 11:22:27.446886  950344 main.go:141] libmachine: (addons-158281)   <cpu mode='host-passthrough'>
	I0120 11:22:27.446897  950344 main.go:141] libmachine: (addons-158281)   
	I0120 11:22:27.446904  950344 main.go:141] libmachine: (addons-158281)   </cpu>
	I0120 11:22:27.446909  950344 main.go:141] libmachine: (addons-158281)   <os>
	I0120 11:22:27.446914  950344 main.go:141] libmachine: (addons-158281)     <type>hvm</type>
	I0120 11:22:27.446919  950344 main.go:141] libmachine: (addons-158281)     <boot dev='cdrom'/>
	I0120 11:22:27.446926  950344 main.go:141] libmachine: (addons-158281)     <boot dev='hd'/>
	I0120 11:22:27.446931  950344 main.go:141] libmachine: (addons-158281)     <bootmenu enable='no'/>
	I0120 11:22:27.446935  950344 main.go:141] libmachine: (addons-158281)   </os>
	I0120 11:22:27.446939  950344 main.go:141] libmachine: (addons-158281)   <devices>
	I0120 11:22:27.446946  950344 main.go:141] libmachine: (addons-158281)     <disk type='file' device='cdrom'>
	I0120 11:22:27.446954  950344 main.go:141] libmachine: (addons-158281)       <source file='/home/jenkins/minikube-integration/20151-942401/.minikube/machines/addons-158281/boot2docker.iso'/>
	I0120 11:22:27.446962  950344 main.go:141] libmachine: (addons-158281)       <target dev='hdc' bus='scsi'/>
	I0120 11:22:27.446967  950344 main.go:141] libmachine: (addons-158281)       <readonly/>
	I0120 11:22:27.446974  950344 main.go:141] libmachine: (addons-158281)     </disk>
	I0120 11:22:27.446980  950344 main.go:141] libmachine: (addons-158281)     <disk type='file' device='disk'>
	I0120 11:22:27.446988  950344 main.go:141] libmachine: (addons-158281)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0120 11:22:27.446995  950344 main.go:141] libmachine: (addons-158281)       <source file='/home/jenkins/minikube-integration/20151-942401/.minikube/machines/addons-158281/addons-158281.rawdisk'/>
	I0120 11:22:27.447006  950344 main.go:141] libmachine: (addons-158281)       <target dev='hda' bus='virtio'/>
	I0120 11:22:27.447011  950344 main.go:141] libmachine: (addons-158281)     </disk>
	I0120 11:22:27.447021  950344 main.go:141] libmachine: (addons-158281)     <interface type='network'>
	I0120 11:22:27.447134  950344 main.go:141] libmachine: (addons-158281)       <source network='mk-addons-158281'/>
	I0120 11:22:27.447189  950344 main.go:141] libmachine: (addons-158281)       <model type='virtio'/>
	I0120 11:22:27.447203  950344 main.go:141] libmachine: (addons-158281)     </interface>
	I0120 11:22:27.447216  950344 main.go:141] libmachine: (addons-158281)     <interface type='network'>
	I0120 11:22:27.447248  950344 main.go:141] libmachine: (addons-158281)       <source network='default'/>
	I0120 11:22:27.447273  950344 main.go:141] libmachine: (addons-158281)       <model type='virtio'/>
	I0120 11:22:27.447293  950344 main.go:141] libmachine: (addons-158281)     </interface>
	I0120 11:22:27.447311  950344 main.go:141] libmachine: (addons-158281)     <serial type='pty'>
	I0120 11:22:27.447329  950344 main.go:141] libmachine: (addons-158281)       <target port='0'/>
	I0120 11:22:27.447344  950344 main.go:141] libmachine: (addons-158281)     </serial>
	I0120 11:22:27.447357  950344 main.go:141] libmachine: (addons-158281)     <console type='pty'>
	I0120 11:22:27.447369  950344 main.go:141] libmachine: (addons-158281)       <target type='serial' port='0'/>
	I0120 11:22:27.447380  950344 main.go:141] libmachine: (addons-158281)     </console>
	I0120 11:22:27.447388  950344 main.go:141] libmachine: (addons-158281)     <rng model='virtio'>
	I0120 11:22:27.447402  950344 main.go:141] libmachine: (addons-158281)       <backend model='random'>/dev/random</backend>
	I0120 11:22:27.447409  950344 main.go:141] libmachine: (addons-158281)     </rng>
	I0120 11:22:27.447421  950344 main.go:141] libmachine: (addons-158281)     
	I0120 11:22:27.447436  950344 main.go:141] libmachine: (addons-158281)     
	I0120 11:22:27.447448  950344 main.go:141] libmachine: (addons-158281)   </devices>
	I0120 11:22:27.447458  950344 main.go:141] libmachine: (addons-158281) </domain>
	I0120 11:22:27.447472  950344 main.go:141] libmachine: (addons-158281) 
	I0120 11:22:27.452938  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ce:49:e7 in network default
	I0120 11:22:27.453767  950344 main.go:141] libmachine: (addons-158281) starting domain...
	I0120 11:22:27.453789  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:27.453798  950344 main.go:141] libmachine: (addons-158281) ensuring networks are active...
	I0120 11:22:27.454501  950344 main.go:141] libmachine: (addons-158281) Ensuring network default is active
	I0120 11:22:27.454851  950344 main.go:141] libmachine: (addons-158281) Ensuring network mk-addons-158281 is active
	I0120 11:22:27.455430  950344 main.go:141] libmachine: (addons-158281) getting domain XML...
	I0120 11:22:27.456239  950344 main.go:141] libmachine: (addons-158281) creating domain...
	I0120 11:22:28.862293  950344 main.go:141] libmachine: (addons-158281) waiting for IP...
	I0120 11:22:28.863059  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:28.863450  950344 main.go:141] libmachine: (addons-158281) DBG | unable to find current IP address of domain addons-158281 in network mk-addons-158281
	I0120 11:22:28.863517  950344 main.go:141] libmachine: (addons-158281) DBG | I0120 11:22:28.863456  950367 retry.go:31] will retry after 240.229861ms: waiting for domain to come up
	I0120 11:22:29.104968  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:29.105366  950344 main.go:141] libmachine: (addons-158281) DBG | unable to find current IP address of domain addons-158281 in network mk-addons-158281
	I0120 11:22:29.105418  950344 main.go:141] libmachine: (addons-158281) DBG | I0120 11:22:29.105329  950367 retry.go:31] will retry after 279.52547ms: waiting for domain to come up
	I0120 11:22:29.386930  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:29.387408  950344 main.go:141] libmachine: (addons-158281) DBG | unable to find current IP address of domain addons-158281 in network mk-addons-158281
	I0120 11:22:29.387439  950344 main.go:141] libmachine: (addons-158281) DBG | I0120 11:22:29.387361  950367 retry.go:31] will retry after 455.091815ms: waiting for domain to come up
	I0120 11:22:29.843617  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:29.844108  950344 main.go:141] libmachine: (addons-158281) DBG | unable to find current IP address of domain addons-158281 in network mk-addons-158281
	I0120 11:22:29.844172  950344 main.go:141] libmachine: (addons-158281) DBG | I0120 11:22:29.844095  950367 retry.go:31] will retry after 433.03157ms: waiting for domain to come up
	I0120 11:22:30.278830  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:30.279207  950344 main.go:141] libmachine: (addons-158281) DBG | unable to find current IP address of domain addons-158281 in network mk-addons-158281
	I0120 11:22:30.279240  950344 main.go:141] libmachine: (addons-158281) DBG | I0120 11:22:30.279148  950367 retry.go:31] will retry after 692.076175ms: waiting for domain to come up
	I0120 11:22:30.972371  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:30.972737  950344 main.go:141] libmachine: (addons-158281) DBG | unable to find current IP address of domain addons-158281 in network mk-addons-158281
	I0120 11:22:30.972772  950344 main.go:141] libmachine: (addons-158281) DBG | I0120 11:22:30.972702  950367 retry.go:31] will retry after 747.053482ms: waiting for domain to come up
	I0120 11:22:31.721315  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:31.721767  950344 main.go:141] libmachine: (addons-158281) DBG | unable to find current IP address of domain addons-158281 in network mk-addons-158281
	I0120 11:22:31.721798  950344 main.go:141] libmachine: (addons-158281) DBG | I0120 11:22:31.721729  950367 retry.go:31] will retry after 957.124515ms: waiting for domain to come up
	I0120 11:22:32.680917  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:32.681383  950344 main.go:141] libmachine: (addons-158281) DBG | unable to find current IP address of domain addons-158281 in network mk-addons-158281
	I0120 11:22:32.681411  950344 main.go:141] libmachine: (addons-158281) DBG | I0120 11:22:32.681339  950367 retry.go:31] will retry after 1.394004029s: waiting for domain to come up
	I0120 11:22:34.076930  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:34.077311  950344 main.go:141] libmachine: (addons-158281) DBG | unable to find current IP address of domain addons-158281 in network mk-addons-158281
	I0120 11:22:34.077343  950344 main.go:141] libmachine: (addons-158281) DBG | I0120 11:22:34.077284  950367 retry.go:31] will retry after 1.328010048s: waiting for domain to come up
	I0120 11:22:35.406763  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:35.407200  950344 main.go:141] libmachine: (addons-158281) DBG | unable to find current IP address of domain addons-158281 in network mk-addons-158281
	I0120 11:22:35.407221  950344 main.go:141] libmachine: (addons-158281) DBG | I0120 11:22:35.407165  950367 retry.go:31] will retry after 1.859853352s: waiting for domain to come up
	I0120 11:22:37.268882  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:37.269277  950344 main.go:141] libmachine: (addons-158281) DBG | unable to find current IP address of domain addons-158281 in network mk-addons-158281
	I0120 11:22:37.269310  950344 main.go:141] libmachine: (addons-158281) DBG | I0120 11:22:37.269227  950367 retry.go:31] will retry after 2.185783668s: waiting for domain to come up
	I0120 11:22:39.456803  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:39.457199  950344 main.go:141] libmachine: (addons-158281) DBG | unable to find current IP address of domain addons-158281 in network mk-addons-158281
	I0120 11:22:39.457378  950344 main.go:141] libmachine: (addons-158281) DBG | I0120 11:22:39.457300  950367 retry.go:31] will retry after 3.175697326s: waiting for domain to come up
	I0120 11:22:42.635299  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:42.635687  950344 main.go:141] libmachine: (addons-158281) DBG | unable to find current IP address of domain addons-158281 in network mk-addons-158281
	I0120 11:22:42.635717  950344 main.go:141] libmachine: (addons-158281) DBG | I0120 11:22:42.635641  950367 retry.go:31] will retry after 3.365359324s: waiting for domain to come up
	I0120 11:22:46.002725  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:46.003217  950344 main.go:141] libmachine: (addons-158281) DBG | unable to find current IP address of domain addons-158281 in network mk-addons-158281
	I0120 11:22:46.003249  950344 main.go:141] libmachine: (addons-158281) DBG | I0120 11:22:46.003175  950367 retry.go:31] will retry after 3.623752852s: waiting for domain to come up
	I0120 11:22:49.628377  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:49.628945  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has current primary IP address 192.168.39.113 and MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:49.629155  950344 main.go:141] libmachine: (addons-158281) found domain IP: 192.168.39.113
	I0120 11:22:49.629177  950344 main.go:141] libmachine: (addons-158281) reserving static IP address...
	I0120 11:22:49.629951  950344 main.go:141] libmachine: (addons-158281) DBG | unable to find host DHCP lease matching {name: "addons-158281", mac: "52:54:00:ea:42:b5", ip: "192.168.39.113"} in network mk-addons-158281
	I0120 11:22:49.701328  950344 main.go:141] libmachine: (addons-158281) DBG | Getting to WaitForSSH function...
	I0120 11:22:49.701362  950344 main.go:141] libmachine: (addons-158281) reserved static IP address 192.168.39.113 for domain addons-158281
	I0120 11:22:49.701376  950344 main.go:141] libmachine: (addons-158281) waiting for SSH...
	I0120 11:22:49.703890  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:49.704278  950344 main.go:141] libmachine: (addons-158281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:42:b5", ip: ""} in network mk-addons-158281: {Iface:virbr1 ExpiryTime:2025-01-20 12:22:41 +0000 UTC Type:0 Mac:52:54:00:ea:42:b5 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ea:42:b5}
	I0120 11:22:49.704311  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined IP address 192.168.39.113 and MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:49.704508  950344 main.go:141] libmachine: (addons-158281) DBG | Using SSH client type: external
	I0120 11:22:49.704536  950344 main.go:141] libmachine: (addons-158281) DBG | Using SSH private key: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/addons-158281/id_rsa (-rw-------)
	I0120 11:22:49.704575  950344 main.go:141] libmachine: (addons-158281) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.113 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20151-942401/.minikube/machines/addons-158281/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 11:22:49.704592  950344 main.go:141] libmachine: (addons-158281) DBG | About to run SSH command:
	I0120 11:22:49.704606  950344 main.go:141] libmachine: (addons-158281) DBG | exit 0
	I0120 11:22:49.830295  950344 main.go:141] libmachine: (addons-158281) DBG | SSH cmd err, output: <nil>: 
	I0120 11:22:49.830590  950344 main.go:141] libmachine: (addons-158281) KVM machine creation complete
	I0120 11:22:49.830848  950344 main.go:141] libmachine: (addons-158281) Calling .GetConfigRaw
	I0120 11:22:49.831463  950344 main.go:141] libmachine: (addons-158281) Calling .DriverName
	I0120 11:22:49.831647  950344 main.go:141] libmachine: (addons-158281) Calling .DriverName
	I0120 11:22:49.831822  950344 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0120 11:22:49.831840  950344 main.go:141] libmachine: (addons-158281) Calling .GetState
	I0120 11:22:49.833008  950344 main.go:141] libmachine: Detecting operating system of created instance...
	I0120 11:22:49.833022  950344 main.go:141] libmachine: Waiting for SSH to be available...
	I0120 11:22:49.833027  950344 main.go:141] libmachine: Getting to WaitForSSH function...
	I0120 11:22:49.833033  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHHostname
	I0120 11:22:49.835235  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:49.835552  950344 main.go:141] libmachine: (addons-158281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:42:b5", ip: ""} in network mk-addons-158281: {Iface:virbr1 ExpiryTime:2025-01-20 12:22:41 +0000 UTC Type:0 Mac:52:54:00:ea:42:b5 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-158281 Clientid:01:52:54:00:ea:42:b5}
	I0120 11:22:49.835577  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined IP address 192.168.39.113 and MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:49.835769  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHPort
	I0120 11:22:49.835977  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHKeyPath
	I0120 11:22:49.836168  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHKeyPath
	I0120 11:22:49.836300  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHUsername
	I0120 11:22:49.836458  950344 main.go:141] libmachine: Using SSH client type: native
	I0120 11:22:49.836712  950344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0120 11:22:49.836727  950344 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0120 11:22:49.929099  950344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 11:22:49.929120  950344 main.go:141] libmachine: Detecting the provisioner...
	I0120 11:22:49.929130  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHHostname
	I0120 11:22:49.931693  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:49.932089  950344 main.go:141] libmachine: (addons-158281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:42:b5", ip: ""} in network mk-addons-158281: {Iface:virbr1 ExpiryTime:2025-01-20 12:22:41 +0000 UTC Type:0 Mac:52:54:00:ea:42:b5 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-158281 Clientid:01:52:54:00:ea:42:b5}
	I0120 11:22:49.932116  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined IP address 192.168.39.113 and MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:49.932240  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHPort
	I0120 11:22:49.932437  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHKeyPath
	I0120 11:22:49.932622  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHKeyPath
	I0120 11:22:49.932784  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHUsername
	I0120 11:22:49.932943  950344 main.go:141] libmachine: Using SSH client type: native
	I0120 11:22:49.933112  950344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0120 11:22:49.933124  950344 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0120 11:22:50.030412  950344 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0120 11:22:50.030504  950344 main.go:141] libmachine: found compatible host: buildroot
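The provisioner detection above amounts to parsing the /etc/os-release output and matching the distribution ID. A small Go sketch of that parsing, fed the exact output captured in this log (the matching logic is simplified and is not libmachine's provisioner registry):

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// parseOSRelease extracts KEY=value pairs from /etc/os-release style output.
	func parseOSRelease(out string) map[string]string {
		fields := map[string]string{}
		sc := bufio.NewScanner(strings.NewReader(out))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if line == "" || !strings.Contains(line, "=") {
				continue
			}
			kv := strings.SplitN(line, "=", 2)
			fields[kv[0]] = strings.Trim(kv[1], `"`)
		}
		return fields
	}

	func main() {
		// The output captured in the log above.
		out := `NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"`

		osr := parseOSRelease(out)
		if osr["ID"] == "buildroot" {
			fmt.Println("found compatible host: buildroot")
		} else {
			fmt.Printf("no provisioner match for ID=%q\n", osr["ID"])
		}
	}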
	I0120 11:22:50.030535  950344 main.go:141] libmachine: Provisioning with buildroot...
	I0120 11:22:50.030544  950344 main.go:141] libmachine: (addons-158281) Calling .GetMachineName
	I0120 11:22:50.030781  950344 buildroot.go:166] provisioning hostname "addons-158281"
	I0120 11:22:50.030806  950344 main.go:141] libmachine: (addons-158281) Calling .GetMachineName
	I0120 11:22:50.030989  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHHostname
	I0120 11:22:50.033550  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:50.033947  950344 main.go:141] libmachine: (addons-158281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:42:b5", ip: ""} in network mk-addons-158281: {Iface:virbr1 ExpiryTime:2025-01-20 12:22:41 +0000 UTC Type:0 Mac:52:54:00:ea:42:b5 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-158281 Clientid:01:52:54:00:ea:42:b5}
	I0120 11:22:50.033968  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined IP address 192.168.39.113 and MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:50.034123  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHPort
	I0120 11:22:50.034305  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHKeyPath
	I0120 11:22:50.034462  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHKeyPath
	I0120 11:22:50.034622  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHUsername
	I0120 11:22:50.034797  950344 main.go:141] libmachine: Using SSH client type: native
	I0120 11:22:50.034995  950344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0120 11:22:50.035012  950344 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-158281 && echo "addons-158281" | sudo tee /etc/hostname
	I0120 11:22:50.146381  950344 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-158281
	
	I0120 11:22:50.146405  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHHostname
	I0120 11:22:50.148734  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:50.149080  950344 main.go:141] libmachine: (addons-158281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:42:b5", ip: ""} in network mk-addons-158281: {Iface:virbr1 ExpiryTime:2025-01-20 12:22:41 +0000 UTC Type:0 Mac:52:54:00:ea:42:b5 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-158281 Clientid:01:52:54:00:ea:42:b5}
	I0120 11:22:50.149111  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined IP address 192.168.39.113 and MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:50.149242  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHPort
	I0120 11:22:50.149424  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHKeyPath
	I0120 11:22:50.149561  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHKeyPath
	I0120 11:22:50.149689  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHUsername
	I0120 11:22:50.149856  950344 main.go:141] libmachine: Using SSH client type: native
	I0120 11:22:50.150013  950344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0120 11:22:50.150028  950344 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-158281' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-158281/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-158281' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 11:22:50.254287  950344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 11:22:50.254323  950344 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20151-942401/.minikube CaCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20151-942401/.minikube}
	I0120 11:22:50.254345  950344 buildroot.go:174] setting up certificates
	I0120 11:22:50.254366  950344 provision.go:84] configureAuth start
	I0120 11:22:50.254385  950344 main.go:141] libmachine: (addons-158281) Calling .GetMachineName
	I0120 11:22:50.254668  950344 main.go:141] libmachine: (addons-158281) Calling .GetIP
	I0120 11:22:50.257134  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:50.257481  950344 main.go:141] libmachine: (addons-158281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:42:b5", ip: ""} in network mk-addons-158281: {Iface:virbr1 ExpiryTime:2025-01-20 12:22:41 +0000 UTC Type:0 Mac:52:54:00:ea:42:b5 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-158281 Clientid:01:52:54:00:ea:42:b5}
	I0120 11:22:50.257510  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined IP address 192.168.39.113 and MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:50.257730  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHHostname
	I0120 11:22:50.260047  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:50.260354  950344 main.go:141] libmachine: (addons-158281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:42:b5", ip: ""} in network mk-addons-158281: {Iface:virbr1 ExpiryTime:2025-01-20 12:22:41 +0000 UTC Type:0 Mac:52:54:00:ea:42:b5 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-158281 Clientid:01:52:54:00:ea:42:b5}
	I0120 11:22:50.260383  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined IP address 192.168.39.113 and MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:50.260518  950344 provision.go:143] copyHostCerts
	I0120 11:22:50.260598  950344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem (1675 bytes)
	I0120 11:22:50.260717  950344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem (1078 bytes)
	I0120 11:22:50.260793  950344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem (1123 bytes)
	I0120 11:22:50.260863  950344 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem org=jenkins.addons-158281 san=[127.0.0.1 192.168.39.113 addons-158281 localhost minikube]
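The server certificate generated here is signed by the local minikube CA and carries the SANs listed on the line above (127.0.0.1, 192.168.39.113, addons-158281, localhost, minikube). A compressed crypto/x509 sketch of issuing such a cert; the throwaway in-memory CA in main is for illustration only, since the real flow loads ca.pem/ca-key.pem from the .minikube/certs directory:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	// issueServerCert signs a server certificate with the given CA so that the
	// listed DNS names and IPs end up in the SubjectAltName extension.
	func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, dnsNames []string, ips []net.IP) ([]byte, []byte, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-158281"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     dnsNames,
			IPAddresses:  ips,
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
		keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
		return certPEM, keyPEM, nil
	}

	func main() {
		// Throwaway in-memory CA for illustration; the real flow loads the CA key pair from disk.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// SANs copied from the provisioning line in the log.
		certPEM, keyPEM, err := issueServerCert(caCert, caKey,
			[]string{"addons-158281", "localhost", "minikube"},
			[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.113")})
		if err != nil {
			panic(err)
		}
		_ = os.WriteFile("server.pem", certPEM, 0o644)
		_ = os.WriteFile("server-key.pem", keyPEM, 0o600)
	}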
	I0120 11:22:50.351047  950344 provision.go:177] copyRemoteCerts
	I0120 11:22:50.351105  950344 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 11:22:50.351130  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHHostname
	I0120 11:22:50.353273  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:50.353586  950344 main.go:141] libmachine: (addons-158281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:42:b5", ip: ""} in network mk-addons-158281: {Iface:virbr1 ExpiryTime:2025-01-20 12:22:41 +0000 UTC Type:0 Mac:52:54:00:ea:42:b5 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-158281 Clientid:01:52:54:00:ea:42:b5}
	I0120 11:22:50.353616  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined IP address 192.168.39.113 and MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:50.353732  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHPort
	I0120 11:22:50.353922  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHKeyPath
	I0120 11:22:50.354065  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHUsername
	I0120 11:22:50.354236  950344 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/addons-158281/id_rsa Username:docker}
	I0120 11:22:50.433355  950344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0120 11:22:50.456824  950344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0120 11:22:50.477730  950344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0120 11:22:50.499413  950344 provision.go:87] duration metric: took 245.029353ms to configureAuth
	I0120 11:22:50.499441  950344 buildroot.go:189] setting minikube options for container-runtime
	I0120 11:22:50.499599  950344 config.go:182] Loaded profile config "addons-158281": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 11:22:50.499685  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHHostname
	I0120 11:22:50.502014  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:50.502357  950344 main.go:141] libmachine: (addons-158281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:42:b5", ip: ""} in network mk-addons-158281: {Iface:virbr1 ExpiryTime:2025-01-20 12:22:41 +0000 UTC Type:0 Mac:52:54:00:ea:42:b5 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-158281 Clientid:01:52:54:00:ea:42:b5}
	I0120 11:22:50.502388  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined IP address 192.168.39.113 and MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:50.502612  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHPort
	I0120 11:22:50.502768  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHKeyPath
	I0120 11:22:50.502926  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHKeyPath
	I0120 11:22:50.503076  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHUsername
	I0120 11:22:50.503238  950344 main.go:141] libmachine: Using SSH client type: native
	I0120 11:22:50.503437  950344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0120 11:22:50.503452  950344 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 11:22:50.705741  950344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 11:22:50.705774  950344 main.go:141] libmachine: Checking connection to Docker...
	I0120 11:22:50.705783  950344 main.go:141] libmachine: (addons-158281) Calling .GetURL
	I0120 11:22:50.707084  950344 main.go:141] libmachine: (addons-158281) DBG | using libvirt version 6000000
	I0120 11:22:50.709157  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:50.709508  950344 main.go:141] libmachine: (addons-158281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:42:b5", ip: ""} in network mk-addons-158281: {Iface:virbr1 ExpiryTime:2025-01-20 12:22:41 +0000 UTC Type:0 Mac:52:54:00:ea:42:b5 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-158281 Clientid:01:52:54:00:ea:42:b5}
	I0120 11:22:50.709541  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined IP address 192.168.39.113 and MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:50.709759  950344 main.go:141] libmachine: Docker is up and running!
	I0120 11:22:50.709776  950344 main.go:141] libmachine: Reticulating splines...
	I0120 11:22:50.709784  950344 client.go:171] duration metric: took 24.367921443s to LocalClient.Create
	I0120 11:22:50.709814  950344 start.go:167] duration metric: took 24.367998078s to libmachine.API.Create "addons-158281"
	I0120 11:22:50.709830  950344 start.go:293] postStartSetup for "addons-158281" (driver="kvm2")
	I0120 11:22:50.709848  950344 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 11:22:50.709873  950344 main.go:141] libmachine: (addons-158281) Calling .DriverName
	I0120 11:22:50.710152  950344 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 11:22:50.710187  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHHostname
	I0120 11:22:50.712244  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:50.712543  950344 main.go:141] libmachine: (addons-158281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:42:b5", ip: ""} in network mk-addons-158281: {Iface:virbr1 ExpiryTime:2025-01-20 12:22:41 +0000 UTC Type:0 Mac:52:54:00:ea:42:b5 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-158281 Clientid:01:52:54:00:ea:42:b5}
	I0120 11:22:50.712569  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined IP address 192.168.39.113 and MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:50.712706  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHPort
	I0120 11:22:50.712882  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHKeyPath
	I0120 11:22:50.713029  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHUsername
	I0120 11:22:50.713159  950344 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/addons-158281/id_rsa Username:docker}
	I0120 11:22:50.787412  950344 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 11:22:50.791064  950344 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 11:22:50.791082  950344 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-942401/.minikube/addons for local assets ...
	I0120 11:22:50.791156  950344 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-942401/.minikube/files for local assets ...
	I0120 11:22:50.791182  950344 start.go:296] duration metric: took 81.342431ms for postStartSetup
	I0120 11:22:50.791218  950344 main.go:141] libmachine: (addons-158281) Calling .GetConfigRaw
	I0120 11:22:50.791748  950344 main.go:141] libmachine: (addons-158281) Calling .GetIP
	I0120 11:22:50.793947  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:50.794277  950344 main.go:141] libmachine: (addons-158281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:42:b5", ip: ""} in network mk-addons-158281: {Iface:virbr1 ExpiryTime:2025-01-20 12:22:41 +0000 UTC Type:0 Mac:52:54:00:ea:42:b5 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-158281 Clientid:01:52:54:00:ea:42:b5}
	I0120 11:22:50.794307  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined IP address 192.168.39.113 and MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:50.794516  950344 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/config.json ...
	I0120 11:22:50.794710  950344 start.go:128] duration metric: took 24.470949364s to createHost
	I0120 11:22:50.794737  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHHostname
	I0120 11:22:50.796901  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:50.797253  950344 main.go:141] libmachine: (addons-158281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:42:b5", ip: ""} in network mk-addons-158281: {Iface:virbr1 ExpiryTime:2025-01-20 12:22:41 +0000 UTC Type:0 Mac:52:54:00:ea:42:b5 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-158281 Clientid:01:52:54:00:ea:42:b5}
	I0120 11:22:50.797282  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined IP address 192.168.39.113 and MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:50.797451  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHPort
	I0120 11:22:50.797640  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHKeyPath
	I0120 11:22:50.797805  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHKeyPath
	I0120 11:22:50.797937  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHUsername
	I0120 11:22:50.798097  950344 main.go:141] libmachine: Using SSH client type: native
	I0120 11:22:50.798264  950344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0120 11:22:50.798274  950344 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 11:22:50.894840  950344 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737372170.870222439
	
	I0120 11:22:50.894864  950344 fix.go:216] guest clock: 1737372170.870222439
	I0120 11:22:50.894875  950344 fix.go:229] Guest: 2025-01-20 11:22:50.870222439 +0000 UTC Remote: 2025-01-20 11:22:50.794723871 +0000 UTC m=+24.575862907 (delta=75.498568ms)
	I0120 11:22:50.894905  950344 fix.go:200] guest clock delta is within tolerance: 75.498568ms
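The guest clock check above runs date +%s.%N on the VM and compares the result to the host clock; the 75.498568ms delta is accepted because it is under the drift tolerance. A small sketch of parsing that output and applying a tolerance (the one-second tolerance here is an assumption for illustration):

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// parseEpoch turns "1737372170.870222439" (the output of `date +%s.%N`) into a time.Time.
	func parseEpoch(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			frac := parts[1]
			if len(frac) > 9 {
				frac = frac[:9] // nanosecond precision at most
			} else {
				frac += strings.Repeat("0", 9-len(frac)) // right-pad so ".87" means 870ms
			}
			nsec, err = strconv.ParseInt(frac, 10, 64)
			if err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec).UTC(), nil
	}

	func main() {
		guest, err := parseEpoch("1737372170.870222439") // value captured in the log above
		if err != nil {
			panic(err)
		}
		host := time.Now().UTC()
		delta := guest.Sub(host)
		// The one-second tolerance is an assumption for illustration; only large drifts trigger a resync.
		const tolerance = time.Second
		if math.Abs(float64(delta)) <= float64(tolerance) {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock drifted by %v, would resync\n", delta)
		}
	}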
	I0120 11:22:50.894922  950344 start.go:83] releasing machines lock for "addons-158281", held for 24.571239679s
	I0120 11:22:50.894961  950344 main.go:141] libmachine: (addons-158281) Calling .DriverName
	I0120 11:22:50.895236  950344 main.go:141] libmachine: (addons-158281) Calling .GetIP
	I0120 11:22:50.897649  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:50.897958  950344 main.go:141] libmachine: (addons-158281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:42:b5", ip: ""} in network mk-addons-158281: {Iface:virbr1 ExpiryTime:2025-01-20 12:22:41 +0000 UTC Type:0 Mac:52:54:00:ea:42:b5 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-158281 Clientid:01:52:54:00:ea:42:b5}
	I0120 11:22:50.897978  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined IP address 192.168.39.113 and MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:50.898157  950344 main.go:141] libmachine: (addons-158281) Calling .DriverName
	I0120 11:22:50.898607  950344 main.go:141] libmachine: (addons-158281) Calling .DriverName
	I0120 11:22:50.898786  950344 main.go:141] libmachine: (addons-158281) Calling .DriverName
	I0120 11:22:50.898872  950344 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 11:22:50.898918  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHHostname
	I0120 11:22:50.898977  950344 ssh_runner.go:195] Run: cat /version.json
	I0120 11:22:50.899004  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHHostname
	I0120 11:22:50.901487  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:50.901752  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:50.901788  950344 main.go:141] libmachine: (addons-158281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:42:b5", ip: ""} in network mk-addons-158281: {Iface:virbr1 ExpiryTime:2025-01-20 12:22:41 +0000 UTC Type:0 Mac:52:54:00:ea:42:b5 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-158281 Clientid:01:52:54:00:ea:42:b5}
	I0120 11:22:50.901820  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined IP address 192.168.39.113 and MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:50.901931  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHPort
	I0120 11:22:50.902076  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHKeyPath
	I0120 11:22:50.902194  950344 main.go:141] libmachine: (addons-158281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:42:b5", ip: ""} in network mk-addons-158281: {Iface:virbr1 ExpiryTime:2025-01-20 12:22:41 +0000 UTC Type:0 Mac:52:54:00:ea:42:b5 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-158281 Clientid:01:52:54:00:ea:42:b5}
	I0120 11:22:50.902207  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHUsername
	I0120 11:22:50.902217  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined IP address 192.168.39.113 and MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:50.902333  950344 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/addons-158281/id_rsa Username:docker}
	I0120 11:22:50.902438  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHPort
	I0120 11:22:50.902590  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHKeyPath
	I0120 11:22:50.902744  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHUsername
	I0120 11:22:50.902883  950344 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/addons-158281/id_rsa Username:docker}
	I0120 11:22:51.001017  950344 ssh_runner.go:195] Run: systemctl --version
	I0120 11:22:51.006459  950344 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 11:22:51.158242  950344 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 11:22:51.164381  950344 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 11:22:51.164440  950344 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 11:22:51.179120  950344 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 11:22:51.179143  950344 start.go:495] detecting cgroup driver to use...
	I0120 11:22:51.179200  950344 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 11:22:51.193384  950344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 11:22:51.205760  950344 docker.go:217] disabling cri-docker service (if available) ...
	I0120 11:22:51.205808  950344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 11:22:51.218120  950344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 11:22:51.230499  950344 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 11:22:51.341387  950344 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 11:22:51.488115  950344 docker.go:233] disabling docker service ...
	I0120 11:22:51.488185  950344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 11:22:51.501123  950344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 11:22:51.512582  950344 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 11:22:51.621773  950344 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 11:22:51.721512  950344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 11:22:51.733559  950344 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 11:22:51.749604  950344 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0120 11:22:51.749667  950344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 11:22:51.758811  950344 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 11:22:51.758875  950344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 11:22:51.768049  950344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 11:22:51.777335  950344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 11:22:51.786323  950344 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 11:22:51.795562  950344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 11:22:51.804516  950344 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 11:22:51.819328  950344 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
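The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf: pin the pause image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs with conmon_cgroup = "pod", and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A simplified Go equivalent of those edits (illustrative; the real flow runs the sed commands over SSH exactly as logged):

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// patchCrioConf applies the same rewrites the sed commands in the log perform:
	// pin the pause image, switch the cgroup manager to cgroupfs, pin conmon to the
	// pod cgroup, and allow unprivileged processes to bind low ports.
	func patchCrioConf(path string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		conf := string(data)
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
		// Drop any existing conmon_cgroup line, then re-add it right after cgroup_manager.
		conf = regexp.MustCompile(`(?m)^\s*conmon_cgroup = .*\n?`).ReplaceAllString(conf, "")
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
		if !regexp.MustCompile(`(?m)^\s*default_sysctls`).MatchString(conf) {
			conf += "\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
		}
		return os.WriteFile(path, []byte(conf), 0o644)
	}

	func main() {
		if err := patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("patched; follow with: sudo systemctl restart crio")
	}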
	I0120 11:22:51.828655  950344 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 11:22:51.836766  950344 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 11:22:51.836809  950344 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 11:22:51.849064  950344 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 11:22:51.857521  950344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 11:22:51.956091  950344 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 11:22:52.043996  950344 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 11:22:52.044094  950344 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 11:22:52.048341  950344 start.go:563] Will wait 60s for crictl version
	I0120 11:22:52.048400  950344 ssh_runner.go:195] Run: which crictl
	I0120 11:22:52.051759  950344 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 11:22:52.087953  950344 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
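After restarting cri-o, the start code waits up to 60 seconds for /var/run/crio/crio.sock to appear and then for crictl version to answer. A minimal sketch of that kind of socket poll (illustrative; the real check stats the path over SSH as shown above):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls for the CRI socket, mirroring the "Will wait 60s for
	// socket path /var/run/crio/crio.sock" step in the log.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s did not appear within %s", path, timeout)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("crio socket is up; crictl version can now be queried")
	}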
	I0120 11:22:52.088033  950344 ssh_runner.go:195] Run: crio --version
	I0120 11:22:52.113906  950344 ssh_runner.go:195] Run: crio --version
	I0120 11:22:52.140589  950344 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0120 11:22:52.141957  950344 main.go:141] libmachine: (addons-158281) Calling .GetIP
	I0120 11:22:52.145037  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:52.145422  950344 main.go:141] libmachine: (addons-158281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:42:b5", ip: ""} in network mk-addons-158281: {Iface:virbr1 ExpiryTime:2025-01-20 12:22:41 +0000 UTC Type:0 Mac:52:54:00:ea:42:b5 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-158281 Clientid:01:52:54:00:ea:42:b5}
	I0120 11:22:52.145446  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined IP address 192.168.39.113 and MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:22:52.145654  950344 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0120 11:22:52.149360  950344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 11:22:52.160898  950344 kubeadm.go:883] updating cluster {Name:addons-158281 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:addons-158281 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 11:22:52.161009  950344 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 11:22:52.161063  950344 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 11:22:52.191135  950344 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0120 11:22:52.191213  950344 ssh_runner.go:195] Run: which lz4
	I0120 11:22:52.194682  950344 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 11:22:52.198472  950344 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 11:22:52.198500  950344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I0120 11:22:53.289711  950344 crio.go:462] duration metric: took 1.095094628s to copy over tarball
	I0120 11:22:53.289786  950344 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 11:22:55.395768  950344 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.105939048s)
	I0120 11:22:55.395805  950344 crio.go:469] duration metric: took 2.106059945s to extract the tarball
	I0120 11:22:55.395818  950344 ssh_runner.go:146] rm: /preloaded.tar.lz4
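The preload step above checks for lz4, copies the ~398 MB preloaded-images tarball to /preloaded.tar.lz4, and unpacks it into /var with xattrs preserved so cri-o sees the cached images immediately. A small Go wrapper around the same tar invocation (a sketch run on the node itself; the logged flow drives it through ssh_runner):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// extractPreload unpacks the cached image tarball the way the logged command does:
	// tar with lz4 decompression, preserving security.capability xattrs, into /var.
	// (The log then deletes /preloaded.tar.lz4; that cleanup is omitted here.)
	func extractPreload(tarball string) error {
		if _, err := os.Stat(tarball); err != nil {
			return fmt.Errorf("preload tarball missing: %w", err)
		}
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		return cmd.Run()
	}

	func main() {
		if err := extractPreload("/preloaded.tar.lz4"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}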
	I0120 11:22:55.432042  950344 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 11:22:55.470579  950344 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 11:22:55.470604  950344 cache_images.go:84] Images are preloaded, skipping loading
	I0120 11:22:55.470615  950344 kubeadm.go:934] updating node { 192.168.39.113 8443 v1.32.0 crio true true} ...
	I0120 11:22:55.470746  950344 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-158281 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.113
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:addons-158281 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 11:22:55.470833  950344 ssh_runner.go:195] Run: crio config
	I0120 11:22:55.517585  950344 cni.go:84] Creating CNI manager for ""
	I0120 11:22:55.517609  950344 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 11:22:55.517624  950344 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 11:22:55.517655  950344 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.113 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-158281 NodeName:addons-158281 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.113"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.113 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 11:22:55.517819  950344 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.113
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-158281"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.113"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.113"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 11:22:55.517950  950344 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 11:22:55.527824  950344 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 11:22:55.527893  950344 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 11:22:55.537503  950344 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0120 11:22:55.552413  950344 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 11:22:55.567160  950344 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0120 11:22:55.582190  950344 ssh_runner.go:195] Run: grep 192.168.39.113	control-plane.minikube.internal$ /etc/hosts
	I0120 11:22:55.585668  950344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.113	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 11:22:55.596706  950344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 11:22:55.718087  950344 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 11:22:55.733775  950344 certs.go:68] Setting up /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281 for IP: 192.168.39.113
	I0120 11:22:55.733803  950344 certs.go:194] generating shared ca certs ...
	I0120 11:22:55.733825  950344 certs.go:226] acquiring lock for ca certs: {Name:mk7d6f61e0c358ebc451db1b73619131ab80bd63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 11:22:55.733984  950344 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20151-942401/.minikube/ca.key
	I0120 11:22:55.910109  950344 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt ...
	I0120 11:22:55.910143  950344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt: {Name:mke8fa0bd28bd5482e6a15215403bea3ab5218d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 11:22:55.910328  950344 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20151-942401/.minikube/ca.key ...
	I0120 11:22:55.910346  950344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/.minikube/ca.key: {Name:mka26c08e91cd7c86f5b5c5c7ca85e5c7690b440 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 11:22:55.910455  950344 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.key
	I0120 11:22:56.026564  950344 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.crt ...
	I0120 11:22:56.026593  950344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.crt: {Name:mk960d1be2eda278ee0802817cfed52cd26a923f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 11:22:56.026754  950344 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.key ...
	I0120 11:22:56.026769  950344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.key: {Name:mk84df8dde8a388db964eda279fbd02d15f67ad2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 11:22:56.026863  950344 certs.go:256] generating profile certs ...
	I0120 11:22:56.026938  950344 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.key
	I0120 11:22:56.026959  950344 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt with IP's: []
	I0120 11:22:56.154210  950344 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt ...
	I0120 11:22:56.154242  950344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: {Name:mke346ec4269e4e6b68127c5b897103ecded51c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 11:22:56.154437  950344 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.key ...
	I0120 11:22:56.154453  950344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.key: {Name:mk3fee86695709625d0c538968990a04a89f1ebb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 11:22:56.154584  950344 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/apiserver.key.b944b2b6
	I0120 11:22:56.154611  950344 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/apiserver.crt.b944b2b6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.113]
	I0120 11:22:56.265193  950344 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/apiserver.crt.b944b2b6 ...
	I0120 11:22:56.265222  950344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/apiserver.crt.b944b2b6: {Name:mk08378d5bc9843b2d2e69cac7c435168910ed57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 11:22:56.265375  950344 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/apiserver.key.b944b2b6 ...
	I0120 11:22:56.265395  950344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/apiserver.key.b944b2b6: {Name:mkf4a1cb189a583e886f8f3ae5f096d4079a354c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 11:22:56.265494  950344 certs.go:381] copying /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/apiserver.crt.b944b2b6 -> /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/apiserver.crt
	I0120 11:22:56.265589  950344 certs.go:385] copying /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/apiserver.key.b944b2b6 -> /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/apiserver.key
	I0120 11:22:56.265658  950344 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/proxy-client.key
	I0120 11:22:56.265684  950344 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/proxy-client.crt with IP's: []
	I0120 11:22:56.562265  950344 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/proxy-client.crt ...
	I0120 11:22:56.562302  950344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/proxy-client.crt: {Name:mkfd1295a55b003ab15c13d4805aa74d945b9a81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 11:22:56.562501  950344 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/proxy-client.key ...
	I0120 11:22:56.562543  950344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/proxy-client.key: {Name:mkfce14abd5764e280fb7606991f46d07e1d349e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 11:22:56.562760  950344 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem (1679 bytes)
	I0120 11:22:56.562810  950344 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem (1078 bytes)
	I0120 11:22:56.562879  950344 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem (1123 bytes)
	I0120 11:22:56.562929  950344 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem (1675 bytes)
	I0120 11:22:56.563605  950344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 11:22:56.590964  950344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0120 11:22:56.613818  950344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 11:22:56.643142  950344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 11:22:56.663652  950344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0120 11:22:56.683885  950344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0120 11:22:56.703874  950344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 11:22:56.723973  950344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0120 11:22:56.743998  950344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 11:22:56.764038  950344 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 11:22:56.778105  950344 ssh_runner.go:195] Run: openssl version
	I0120 11:22:56.783050  950344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 11:22:56.792415  950344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 11:22:56.796126  950344 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 11:22 /usr/share/ca-certificates/minikubeCA.pem
	I0120 11:22:56.796164  950344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 11:22:56.801169  950344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
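The two openssl/ln steps above install minikubeCA.pem into the system trust store: the symlink name is the certificate's subject-name hash (b5213941 here) with a ".0" suffix, which is what OpenSSL's hashed certificate directory lookup expects. A sketch of the same idea (needs root on a real host; the hash is obtained from the openssl CLI rather than recomputed):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// subjectHash asks openssl for the subject-name hash that hashed cert
	// directories use as the link name (b5213941 for minikubeCA.pem above).
	func subjectHash(certPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem"
		hash, err := subjectHash(cert)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		link := "/etc/ssl/certs/" + hash + ".0"
		// Equivalent of the "ln -fs" in the log; needs root on a real host.
		_ = os.Remove(link)
		if err := os.Symlink(cert, link); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("linked", link, "->", cert)
	}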
	I0120 11:22:56.810280  950344 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 11:22:56.813769  950344 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0120 11:22:56.813813  950344 kubeadm.go:392] StartCluster: {Name:addons-158281 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:addons-158281 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 11:22:56.813880  950344 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 11:22:56.813914  950344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 11:22:56.844543  950344 cri.go:89] found id: ""
	I0120 11:22:56.844604  950344 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 11:22:56.853384  950344 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 11:22:56.861829  950344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 11:22:56.870345  950344 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 11:22:56.870361  950344 kubeadm.go:157] found existing configuration files:
	
	I0120 11:22:56.870407  950344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 11:22:56.878252  950344 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 11:22:56.878299  950344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 11:22:56.886349  950344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 11:22:56.894157  950344 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 11:22:56.894198  950344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 11:22:56.902377  950344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 11:22:56.910266  950344 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 11:22:56.910327  950344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 11:22:56.918323  950344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 11:22:56.926430  950344 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 11:22:56.926480  950344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 11:22:56.934746  950344 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 11:22:56.985094  950344 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 11:22:56.985217  950344 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 11:22:57.076971  950344 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 11:22:57.077100  950344 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 11:22:57.077253  950344 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 11:22:57.084892  950344 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 11:22:57.087024  950344 out.go:235]   - Generating certificates and keys ...
	I0120 11:22:57.091615  950344 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 11:22:57.092610  950344 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 11:22:57.228464  950344 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0120 11:22:57.295097  950344 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0120 11:22:57.586985  950344 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0120 11:22:57.673289  950344 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0120 11:22:57.787824  950344 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0120 11:22:57.787967  950344 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-158281 localhost] and IPs [192.168.39.113 127.0.0.1 ::1]
	I0120 11:22:57.934158  950344 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0120 11:22:57.934320  950344 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-158281 localhost] and IPs [192.168.39.113 127.0.0.1 ::1]
	I0120 11:22:58.153507  950344 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0120 11:22:58.283017  950344 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0120 11:22:58.459000  950344 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0120 11:22:58.459104  950344 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 11:22:58.751207  950344 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 11:22:58.940252  950344 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 11:22:59.139593  950344 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 11:22:59.435283  950344 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 11:22:59.561938  950344 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 11:22:59.562482  950344 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 11:22:59.564759  950344 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 11:22:59.604172  950344 out.go:235]   - Booting up control plane ...
	I0120 11:22:59.604323  950344 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 11:22:59.604442  950344 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 11:22:59.604532  950344 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 11:22:59.604660  950344 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 11:22:59.604812  950344 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 11:22:59.604885  950344 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 11:22:59.700999  950344 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 11:22:59.701199  950344 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 11:23:00.202051  950344 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.695779ms
	I0120 11:23:00.202214  950344 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 11:23:05.200635  950344 kubeadm.go:310] [api-check] The API server is healthy after 5.001614936s
	I0120 11:23:05.215996  950344 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 11:23:05.229428  950344 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 11:23:05.248192  950344 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 11:23:05.248453  950344 kubeadm.go:310] [mark-control-plane] Marking the node addons-158281 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 11:23:05.260789  950344 kubeadm.go:310] [bootstrap-token] Using token: 1s822t.z907d9p3v1y3txhu
	I0120 11:23:05.261885  950344 out.go:235]   - Configuring RBAC rules ...
	I0120 11:23:05.262045  950344 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 11:23:05.266087  950344 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 11:23:05.271799  950344 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 11:23:05.274653  950344 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 11:23:05.277230  950344 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 11:23:05.282920  950344 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 11:23:05.607532  950344 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 11:23:06.037448  950344 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 11:23:06.607181  950344 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 11:23:06.608067  950344 kubeadm.go:310] 
	I0120 11:23:06.608171  950344 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 11:23:06.608188  950344 kubeadm.go:310] 
	I0120 11:23:06.608289  950344 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 11:23:06.608298  950344 kubeadm.go:310] 
	I0120 11:23:06.608350  950344 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 11:23:06.608459  950344 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 11:23:06.608537  950344 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 11:23:06.608547  950344 kubeadm.go:310] 
	I0120 11:23:06.608622  950344 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 11:23:06.608636  950344 kubeadm.go:310] 
	I0120 11:23:06.608721  950344 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 11:23:06.608737  950344 kubeadm.go:310] 
	I0120 11:23:06.608786  950344 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 11:23:06.608889  950344 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 11:23:06.608982  950344 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 11:23:06.608993  950344 kubeadm.go:310] 
	I0120 11:23:06.609074  950344 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 11:23:06.609172  950344 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 11:23:06.609186  950344 kubeadm.go:310] 
	I0120 11:23:06.609267  950344 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1s822t.z907d9p3v1y3txhu \
	I0120 11:23:06.609360  950344 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9daca4b474befed26d429fab1ed19f40e58ea9925316ea59a2e801171f5c9665 \
	I0120 11:23:06.609380  950344 kubeadm.go:310] 	--control-plane 
	I0120 11:23:06.609386  950344 kubeadm.go:310] 
	I0120 11:23:06.609459  950344 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 11:23:06.609465  950344 kubeadm.go:310] 
	I0120 11:23:06.609535  950344 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1s822t.z907d9p3v1y3txhu \
	I0120 11:23:06.609685  950344 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9daca4b474befed26d429fab1ed19f40e58ea9925316ea59a2e801171f5c9665 
	I0120 11:23:06.610464  950344 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
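(The kubeadm output above already prints the follow-up commands for using the new cluster. For reference, a minimal sketch of confirming the control plane from the node itself, assuming root shell access and the /etc/kubernetes/admin.conf path that kubeadm reports writing above, would be:)

    # Illustrative check only; not captured in this log.
    # /etc/kubernetes/admin.conf is the kubeconfig kubeadm reports writing above.
    export KUBECONFIG=/etc/kubernetes/admin.conf
    kubectl get nodes -o wide            # node reports Ready once the bridge CNI below is in place
    kubectl get pods -n kube-system      # control-plane static pods plus coredns and kube-proxy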
	I0120 11:23:06.610497  950344 cni.go:84] Creating CNI manager for ""
	I0120 11:23:06.610512  950344 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 11:23:06.612250  950344 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 11:23:06.613594  950344 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 11:23:06.625246  950344 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
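(The line above records a 496-byte bridge conflist being written to /etc/cni/net.d; the file itself is not reproduced in the log. As a rough illustration, a generic containernetworking bridge configuration has the shape below; field names come from the upstream bridge/host-local/portmap plugins, while the bridge name and subnet here are assumptions, not the actual contents of 1-k8s.conflist:)

    # Hypothetical example only; not the actual 1-k8s.conflist written above.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16",
            "routes": [ { "dst": "0.0.0.0/0" } ]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF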
	I0120 11:23:06.642114  950344 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 11:23:06.642175  950344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 11:23:06.642221  950344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-158281 minikube.k8s.io/updated_at=2025_01_20T11_23_06_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=77d80cf1517f5f1439721b28711982314b21bec9 minikube.k8s.io/name=addons-158281 minikube.k8s.io/primary=true
	I0120 11:23:06.661470  950344 ops.go:34] apiserver oom_adj: -16
	I0120 11:23:06.790473  950344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 11:23:07.291279  950344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 11:23:07.791325  950344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 11:23:08.290629  950344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 11:23:08.790993  950344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 11:23:09.290666  950344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 11:23:09.790658  950344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 11:23:10.290605  950344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 11:23:10.791461  950344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 11:23:10.870952  950344 kubeadm.go:1113] duration metric: took 4.228831131s to wait for elevateKubeSystemPrivileges
	I0120 11:23:10.871001  950344 kubeadm.go:394] duration metric: took 14.057193335s to StartCluster
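(The repeated "kubectl get sa default" calls above are a readiness poll: minikube retries roughly every 500ms, matching the timestamps, until the "default" ServiceAccount exists, which is what the 4.2s elevateKubeSystemPrivileges duration metric measures. A sketch of the equivalent loop, using the same command as in the log with the loop structure assumed:)

    # Poll until the default ServiceAccount exists, as the log's
    # elevateKubeSystemPrivileges wait does above.
    until sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done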
	I0120 11:23:10.871022  950344 settings.go:142] acquiring lock: {Name:mk1751cb86b61fcfcb0ff093c93fb653ca2002ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 11:23:10.871150  950344 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 11:23:10.871507  950344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/kubeconfig: {Name:mk7b2d17f701fc845d991764ab9cc32b8b0646e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 11:23:10.871711  950344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0120 11:23:10.871738  950344 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0120 11:23:10.871713  950344 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.113 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 11:23:10.871810  950344 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-158281"
	I0120 11:23:10.871824  950344 addons.go:69] Setting yakd=true in profile "addons-158281"
	I0120 11:23:10.871834  950344 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-158281"
	I0120 11:23:10.871838  950344 addons.go:238] Setting addon yakd=true in "addons-158281"
	I0120 11:23:10.871867  950344 host.go:66] Checking if "addons-158281" exists ...
	I0120 11:23:10.871868  950344 host.go:66] Checking if "addons-158281" exists ...
	I0120 11:23:10.871863  950344 addons.go:69] Setting default-storageclass=true in profile "addons-158281"
	I0120 11:23:10.871879  950344 addons.go:69] Setting gcp-auth=true in profile "addons-158281"
	I0120 11:23:10.871891  950344 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-158281"
	I0120 11:23:10.871909  950344 addons.go:69] Setting ingress=true in profile "addons-158281"
	I0120 11:23:10.871925  950344 addons.go:238] Setting addon ingress=true in "addons-158281"
	I0120 11:23:10.871967  950344 host.go:66] Checking if "addons-158281" exists ...
	I0120 11:23:10.871972  950344 config.go:182] Loaded profile config "addons-158281": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 11:23:10.872046  950344 addons.go:69] Setting ingress-dns=true in profile "addons-158281"
	I0120 11:23:10.872059  950344 addons.go:238] Setting addon ingress-dns=true in "addons-158281"
	I0120 11:23:10.872086  950344 host.go:66] Checking if "addons-158281" exists ...
	I0120 11:23:10.872127  950344 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-158281"
	I0120 11:23:10.872174  950344 addons.go:69] Setting storage-provisioner=true in profile "addons-158281"
	I0120 11:23:10.872191  950344 addons.go:69] Setting volcano=true in profile "addons-158281"
	I0120 11:23:10.872196  950344 addons.go:238] Setting addon storage-provisioner=true in "addons-158281"
	I0120 11:23:10.872202  950344 addons.go:238] Setting addon volcano=true in "addons-158281"
	I0120 11:23:10.872227  950344 host.go:66] Checking if "addons-158281" exists ...
	I0120 11:23:10.872239  950344 host.go:66] Checking if "addons-158281" exists ...
	I0120 11:23:10.872308  950344 addons.go:69] Setting cloud-spanner=true in profile "addons-158281"
	I0120 11:23:10.872327  950344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:23:10.872327  950344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:23:10.872342  950344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:23:10.872380  950344 addons.go:69] Setting volumesnapshots=true in profile "addons-158281"
	I0120 11:23:10.872392  950344 addons.go:238] Setting addon volumesnapshots=true in "addons-158281"
	I0120 11:23:10.872396  950344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:23:10.872413  950344 host.go:66] Checking if "addons-158281" exists ...
	I0120 11:23:10.872477  950344 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-158281"
	I0120 11:23:10.872478  950344 addons.go:69] Setting registry=true in profile "addons-158281"
	I0120 11:23:10.872491  950344 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-158281"
	I0120 11:23:10.872497  950344 addons.go:238] Setting addon registry=true in "addons-158281"
	I0120 11:23:10.872509  950344 host.go:66] Checking if "addons-158281" exists ...
	I0120 11:23:10.872526  950344 host.go:66] Checking if "addons-158281" exists ...
	I0120 11:23:10.872679  950344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:23:10.872706  950344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:23:10.872713  950344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:23:10.872727  950344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:23:10.872749  950344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:23:10.872781  950344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:23:10.872783  950344 addons.go:69] Setting metrics-server=true in profile "addons-158281"
	I0120 11:23:10.872178  950344 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-158281"
	I0120 11:23:10.872796  950344 addons.go:238] Setting addon metrics-server=true in "addons-158281"
	I0120 11:23:10.872463  950344 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-158281"
	I0120 11:23:10.872811  950344 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-158281"
	I0120 11:23:10.872813  950344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:23:10.872381  950344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:23:10.872857  950344 host.go:66] Checking if "addons-158281" exists ...
	I0120 11:23:10.871900  950344 mustload.go:65] Loading cluster: addons-158281
	I0120 11:23:10.872880  950344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:23:10.872909  950344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:23:10.872333  950344 addons.go:238] Setting addon cloud-spanner=true in "addons-158281"
	I0120 11:23:10.873009  950344 host.go:66] Checking if "addons-158281" exists ...
	I0120 11:23:10.872469  950344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:23:10.871858  950344 addons.go:69] Setting inspektor-gadget=true in profile "addons-158281"
	I0120 11:23:10.873080  950344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:23:10.873115  950344 addons.go:238] Setting addon inspektor-gadget=true in "addons-158281"
	I0120 11:23:10.873127  950344 host.go:66] Checking if "addons-158281" exists ...
	I0120 11:23:10.873210  950344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:23:10.873247  950344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:23:10.873262  950344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:23:10.873295  950344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:23:10.873345  950344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:23:10.873368  950344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:23:10.873402  950344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:23:10.873433  950344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:23:10.873615  950344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:23:10.873676  950344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:23:10.873723  950344 host.go:66] Checking if "addons-158281" exists ...
	I0120 11:23:10.874093  950344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:23:10.874139  950344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:23:10.879058  950344 out.go:177] * Verifying Kubernetes components...
	I0120 11:23:10.880631  950344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 11:23:10.893535  950344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40725
	I0120 11:23:10.893708  950344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33755
	I0120 11:23:10.893863  950344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40881
	I0120 11:23:10.893984  950344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45939
	I0120 11:23:10.894044  950344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33337
	I0120 11:23:10.894165  950344 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:23:10.894378  950344 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:23:10.894513  950344 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:23:10.894689  950344 main.go:141] libmachine: Using API Version  1
	I0120 11:23:10.894710  950344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:23:10.894992  950344 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:23:10.895101  950344 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:23:10.895122  950344 main.go:141] libmachine: Using API Version  1
	I0120 11:23:10.895143  950344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:23:10.895161  950344 main.go:141] libmachine: Using API Version  1
	I0120 11:23:10.895173  950344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:23:10.895190  950344 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:23:10.895636  950344 main.go:141] libmachine: Using API Version  1
	I0120 11:23:10.895668  950344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:23:10.895685  950344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:23:10.895722  950344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:23:10.895796  950344 main.go:141] libmachine: Using API Version  1
	I0120 11:23:10.895807  950344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:23:10.895918  950344 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:23:10.896048  950344 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:23:10.896217  950344 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:23:10.902971  950344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:23:10.902992  950344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38693
	I0120 11:23:10.903020  950344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:23:10.903084  950344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:23:10.903128  950344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:23:10.903162  950344 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:23:10.903359  950344 config.go:182] Loaded profile config "addons-158281": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 11:23:10.903649  950344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:23:10.903686  950344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:23:10.904288  950344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:23:10.904330  950344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:23:10.902971  950344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:23:10.913712  950344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45077
	I0120 11:23:10.913732  950344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:23:10.913855  950344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39211
	I0120 11:23:10.914089  950344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:23:10.914138  950344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:23:10.914211  950344 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:23:10.914324  950344 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:23:10.914912  950344 main.go:141] libmachine: Using API Version  1
	I0120 11:23:10.914931  950344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:23:10.914980  950344 main.go:141] libmachine: Using API Version  1
	I0120 11:23:10.915003  950344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:23:10.915006  950344 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:23:10.915310  950344 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:23:10.915315  950344 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:23:10.915603  950344 main.go:141] libmachine: Using API Version  1
	I0120 11:23:10.915623  950344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:23:10.915850  950344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:23:10.915881  950344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:23:10.916016  950344 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:23:10.916056  950344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:23:10.916160  950344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:23:10.919000  950344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:23:10.919042  950344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:23:10.925455  950344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34981
	I0120 11:23:10.926071  950344 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:23:10.926638  950344 main.go:141] libmachine: Using API Version  1
	I0120 11:23:10.926660  950344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:23:10.927320  950344 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:23:10.927539  950344 main.go:141] libmachine: (addons-158281) Calling .GetState
	I0120 11:23:10.928177  950344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38125
	I0120 11:23:10.931800  950344 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-158281"
	I0120 11:23:10.931847  950344 host.go:66] Checking if "addons-158281" exists ...
	I0120 11:23:10.932197  950344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:23:10.932229  950344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:23:10.932456  950344 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:23:10.932467  950344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40357
	I0120 11:23:10.932669  950344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42167
	I0120 11:23:10.933183  950344 main.go:141] libmachine: Using API Version  1
	I0120 11:23:10.933199  950344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:23:10.933275  950344 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:23:10.934286  950344 main.go:141] libmachine: Using API Version  1
	I0120 11:23:10.934305  950344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:23:10.934374  950344 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:23:10.934704  950344 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:23:10.935120  950344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39557
	I0120 11:23:10.935154  950344 main.go:141] libmachine: (addons-158281) Calling .GetState
	I0120 11:23:10.935260  950344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:23:10.935315  950344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:23:10.935571  950344 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:23:10.936025  950344 main.go:141] libmachine: Using API Version  1
	I0120 11:23:10.936046  950344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:23:10.936129  950344 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:23:10.936639  950344 main.go:141] libmachine: Using API Version  1
	I0120 11:23:10.936658  950344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:23:10.937051  950344 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:23:10.937259  950344 main.go:141] libmachine: (addons-158281) Calling .GetState
	I0120 11:23:10.937325  950344 main.go:141] libmachine: (addons-158281) Calling .DriverName
	I0120 11:23:10.938178  950344 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:23:10.938753  950344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:23:10.938794  950344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:23:10.939267  950344 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0120 11:23:10.940971  950344 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 11:23:10.940995  950344 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 11:23:10.941023  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHHostname
	I0120 11:23:10.941116  950344 host.go:66] Checking if "addons-158281" exists ...
	I0120 11:23:10.941478  950344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:23:10.941528  950344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:23:10.945197  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:23:10.945774  950344 main.go:141] libmachine: (addons-158281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:42:b5", ip: ""} in network mk-addons-158281: {Iface:virbr1 ExpiryTime:2025-01-20 12:22:41 +0000 UTC Type:0 Mac:52:54:00:ea:42:b5 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-158281 Clientid:01:52:54:00:ea:42:b5}
	I0120 11:23:10.945800  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined IP address 192.168.39.113 and MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:23:10.945979  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHPort
	I0120 11:23:10.946143  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHKeyPath
	I0120 11:23:10.946280  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHUsername
	I0120 11:23:10.946427  950344 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/addons-158281/id_rsa Username:docker}
	I0120 11:23:10.948081  950344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41273
	I0120 11:23:10.948724  950344 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:23:10.949272  950344 main.go:141] libmachine: Using API Version  1
	I0120 11:23:10.949292  950344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:23:10.949672  950344 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:23:10.949845  950344 main.go:141] libmachine: (addons-158281) Calling .GetState
	I0120 11:23:10.951404  950344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42919
	I0120 11:23:10.951516  950344 main.go:141] libmachine: (addons-158281) Calling .DriverName
	I0120 11:23:10.951729  950344 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:23:10.951866  950344 main.go:141] libmachine: Making call to close driver server
	I0120 11:23:10.951892  950344 main.go:141] libmachine: (addons-158281) Calling .Close
	I0120 11:23:10.953773  950344 main.go:141] libmachine: Successfully made call to close driver server
	I0120 11:23:10.953788  950344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 11:23:10.953797  950344 main.go:141] libmachine: Making call to close driver server
	I0120 11:23:10.953805  950344 main.go:141] libmachine: (addons-158281) Calling .Close
	I0120 11:23:10.953946  950344 main.go:141] libmachine: Using API Version  1
	I0120 11:23:10.953968  950344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:23:10.954044  950344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39431
	I0120 11:23:10.954163  950344 main.go:141] libmachine: (addons-158281) DBG | Closing plugin on server side
	I0120 11:23:10.954174  950344 main.go:141] libmachine: Successfully made call to close driver server
	I0120 11:23:10.954188  950344 main.go:141] libmachine: Making call to close connection to plugin binary
	W0120 11:23:10.954284  950344 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0120 11:23:10.954442  950344 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:23:10.954697  950344 main.go:141] libmachine: (addons-158281) Calling .GetState
	I0120 11:23:10.955012  950344 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:23:10.955599  950344 main.go:141] libmachine: Using API Version  1
	I0120 11:23:10.955626  950344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:23:10.955959  950344 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:23:10.956112  950344 main.go:141] libmachine: (addons-158281) Calling .GetState
	I0120 11:23:10.958527  950344 main.go:141] libmachine: (addons-158281) Calling .DriverName
	I0120 11:23:10.958713  950344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41223
	I0120 11:23:10.959440  950344 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:23:10.959463  950344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45905
	I0120 11:23:10.959908  950344 main.go:141] libmachine: Using API Version  1
	I0120 11:23:10.959926  950344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:23:10.960327  950344 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:23:10.960446  950344 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0120 11:23:10.960859  950344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:23:10.960898  950344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:23:10.961393  950344 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:23:10.961986  950344 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0120 11:23:10.962008  950344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0120 11:23:10.962030  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHHostname
	I0120 11:23:10.962102  950344 main.go:141] libmachine: Using API Version  1
	I0120 11:23:10.962130  950344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:23:10.962699  950344 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:23:10.963291  950344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:23:10.963329  950344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:23:10.964342  950344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37597
	I0120 11:23:10.966337  950344 addons.go:238] Setting addon default-storageclass=true in "addons-158281"
	I0120 11:23:10.966384  950344 host.go:66] Checking if "addons-158281" exists ...
	I0120 11:23:10.966788  950344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:23:10.966823  950344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:23:10.967053  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHPort
	I0120 11:23:10.967065  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:23:10.967098  950344 main.go:141] libmachine: (addons-158281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:42:b5", ip: ""} in network mk-addons-158281: {Iface:virbr1 ExpiryTime:2025-01-20 12:22:41 +0000 UTC Type:0 Mac:52:54:00:ea:42:b5 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-158281 Clientid:01:52:54:00:ea:42:b5}
	I0120 11:23:10.967116  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined IP address 192.168.39.113 and MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:23:10.967279  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHKeyPath
	I0120 11:23:10.967468  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHUsername
	I0120 11:23:10.967686  950344 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/addons-158281/id_rsa Username:docker}
	I0120 11:23:10.971013  950344 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:23:10.972851  950344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45963
	I0120 11:23:10.973516  950344 main.go:141] libmachine: Using API Version  1
	I0120 11:23:10.973536  950344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:23:10.974007  950344 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:23:10.974092  950344 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:23:10.974353  950344 main.go:141] libmachine: (addons-158281) Calling .GetState
	I0120 11:23:10.975974  950344 main.go:141] libmachine: (addons-158281) Calling .DriverName
	I0120 11:23:10.976933  950344 main.go:141] libmachine: Using API Version  1
	I0120 11:23:10.976953  950344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:23:10.977201  950344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46751
	I0120 11:23:10.977623  950344 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:23:10.977725  950344 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:23:10.977907  950344 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0120 11:23:10.978367  950344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:23:10.978407  950344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:23:10.978701  950344 main.go:141] libmachine: Using API Version  1
	I0120 11:23:10.978727  950344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:23:10.979004  950344 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0120 11:23:10.979029  950344 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0120 11:23:10.979049  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHHostname
	I0120 11:23:10.979673  950344 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:23:10.979893  950344 main.go:141] libmachine: (addons-158281) Calling .DriverName
	I0120 11:23:10.981834  950344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41287
	I0120 11:23:10.983337  950344 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:23:10.984110  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:23:10.984516  950344 main.go:141] libmachine: (addons-158281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:42:b5", ip: ""} in network mk-addons-158281: {Iface:virbr1 ExpiryTime:2025-01-20 12:22:41 +0000 UTC Type:0 Mac:52:54:00:ea:42:b5 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-158281 Clientid:01:52:54:00:ea:42:b5}
	I0120 11:23:10.984540  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined IP address 192.168.39.113 and MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:23:10.984661  950344 main.go:141] libmachine: Using API Version  1
	I0120 11:23:10.984675  950344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:23:10.984743  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHPort
	I0120 11:23:10.985030  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHKeyPath
	I0120 11:23:10.985037  950344 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:23:10.985291  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHUsername
	I0120 11:23:10.985358  950344 main.go:141] libmachine: (addons-158281) Calling .GetState
	I0120 11:23:10.985549  950344 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/addons-158281/id_rsa Username:docker}
	I0120 11:23:10.986845  950344 main.go:141] libmachine: (addons-158281) Calling .DriverName
	I0120 11:23:10.988506  950344 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0120 11:23:10.988710  950344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35627
	I0120 11:23:10.989039  950344 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:23:10.989578  950344 main.go:141] libmachine: Using API Version  1
	I0120 11:23:10.989597  950344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:23:10.989963  950344 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0120 11:23:10.989981  950344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0120 11:23:10.990001  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHHostname
	I0120 11:23:10.990012  950344 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:23:10.990155  950344 main.go:141] libmachine: (addons-158281) Calling .GetState
	I0120 11:23:10.991037  950344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33377
	I0120 11:23:10.991519  950344 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:23:10.991980  950344 main.go:141] libmachine: (addons-158281) Calling .DriverName
	I0120 11:23:10.992044  950344 main.go:141] libmachine: Using API Version  1
	I0120 11:23:10.992063  950344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:23:10.992410  950344 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:23:10.993240  950344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:23:10.993295  950344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:23:10.993726  950344 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0120 11:23:10.994791  950344 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0120 11:23:10.994814  950344 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0120 11:23:10.994833  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHHostname
	I0120 11:23:10.995553  950344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32977
	I0120 11:23:10.996331  950344 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:23:10.997499  950344 main.go:141] libmachine: Using API Version  1
	I0120 11:23:10.997519  950344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:23:10.998103  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:23:10.998451  950344 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:23:10.998859  950344 main.go:141] libmachine: (addons-158281) Calling .GetState
	I0120 11:23:10.999349  950344 main.go:141] libmachine: (addons-158281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:42:b5", ip: ""} in network mk-addons-158281: {Iface:virbr1 ExpiryTime:2025-01-20 12:22:41 +0000 UTC Type:0 Mac:52:54:00:ea:42:b5 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-158281 Clientid:01:52:54:00:ea:42:b5}
	I0120 11:23:10.999370  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined IP address 192.168.39.113 and MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:23:11.000215  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHPort
	I0120 11:23:11.000294  950344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41305
	I0120 11:23:11.000464  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHKeyPath
	I0120 11:23:11.000591  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHUsername
	I0120 11:23:11.000659  950344 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:23:11.000865  950344 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/addons-158281/id_rsa Username:docker}
	I0120 11:23:11.001255  950344 main.go:141] libmachine: Using API Version  1
	I0120 11:23:11.001268  950344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:23:11.001583  950344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37307
	I0120 11:23:11.001750  950344 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:23:11.002026  950344 main.go:141] libmachine: (addons-158281) Calling .GetState
	I0120 11:23:11.002098  950344 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:23:11.003101  950344 main.go:141] libmachine: Using API Version  1
	I0120 11:23:11.003120  950344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:23:11.003581  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:23:11.003856  950344 main.go:141] libmachine: (addons-158281) Calling .DriverName
	I0120 11:23:11.004250  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHPort
	I0120 11:23:11.004531  950344 main.go:141] libmachine: (addons-158281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:42:b5", ip: ""} in network mk-addons-158281: {Iface:virbr1 ExpiryTime:2025-01-20 12:22:41 +0000 UTC Type:0 Mac:52:54:00:ea:42:b5 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-158281 Clientid:01:52:54:00:ea:42:b5}
	I0120 11:23:11.004549  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined IP address 192.168.39.113 and MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:23:11.004590  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHKeyPath
	I0120 11:23:11.004630  950344 main.go:141] libmachine: (addons-158281) Calling .DriverName
	I0120 11:23:11.004633  950344 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:23:11.004671  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHUsername
	I0120 11:23:11.004933  950344 main.go:141] libmachine: (addons-158281) Calling .GetState
	I0120 11:23:11.004994  950344 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/addons-158281/id_rsa Username:docker}
	I0120 11:23:11.006138  950344 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0120 11:23:11.006144  950344 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0120 11:23:11.007262  950344 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0120 11:23:11.007283  950344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0120 11:23:11.007301  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHHostname
	I0120 11:23:11.007810  950344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36875
	I0120 11:23:11.008359  950344 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:23:11.008700  950344 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0120 11:23:11.008863  950344 main.go:141] libmachine: (addons-158281) Calling .DriverName
	I0120 11:23:11.008938  950344 main.go:141] libmachine: Using API Version  1
	I0120 11:23:11.008959  950344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:23:11.009462  950344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35669
	I0120 11:23:11.009849  950344 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:23:11.010135  950344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42171
	I0120 11:23:11.010400  950344 out.go:177]   - Using image docker.io/registry:2.8.3
	I0120 11:23:11.010572  950344 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:23:11.010614  950344 main.go:141] libmachine: (addons-158281) Calling .GetState
	I0120 11:23:11.011521  950344 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0120 11:23:11.011686  950344 main.go:141] libmachine: Using API Version  1
	I0120 11:23:11.011701  950344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:23:11.012432  950344 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:23:11.012639  950344 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0120 11:23:11.013014  950344 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0120 11:23:11.013034  950344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0120 11:23:11.013042  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:23:11.013052  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHHostname
	I0120 11:23:11.013215  950344 main.go:141] libmachine: (addons-158281) Calling .DriverName
	I0120 11:23:11.013361  950344 main.go:141] libmachine: (addons-158281) Calling .GetState
	I0120 11:23:11.013739  950344 main.go:141] libmachine: (addons-158281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:42:b5", ip: ""} in network mk-addons-158281: {Iface:virbr1 ExpiryTime:2025-01-20 12:22:41 +0000 UTC Type:0 Mac:52:54:00:ea:42:b5 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-158281 Clientid:01:52:54:00:ea:42:b5}
	I0120 11:23:11.013764  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined IP address 192.168.39.113 and MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:23:11.013913  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHPort
	I0120 11:23:11.013968  950344 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0120 11:23:11.013983  950344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0120 11:23:11.014003  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHHostname
	I0120 11:23:11.014187  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHKeyPath
	I0120 11:23:11.014398  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHUsername
	I0120 11:23:11.014562  950344 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/addons-158281/id_rsa Username:docker}
	I0120 11:23:11.014643  950344 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 11:23:11.015139  950344 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:23:11.015619  950344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38971
	I0120 11:23:11.015789  950344 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 11:23:11.015806  950344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 11:23:11.015823  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHHostname
	I0120 11:23:11.016014  950344 main.go:141] libmachine: Using API Version  1
	I0120 11:23:11.016031  950344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:23:11.016475  950344 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:23:11.016679  950344 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:23:11.016778  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:23:11.017075  950344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:23:11.017130  950344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:23:11.017281  950344 main.go:141] libmachine: Using API Version  1
	I0120 11:23:11.017296  950344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:23:11.017380  950344 main.go:141] libmachine: (addons-158281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:42:b5", ip: ""} in network mk-addons-158281: {Iface:virbr1 ExpiryTime:2025-01-20 12:22:41 +0000 UTC Type:0 Mac:52:54:00:ea:42:b5 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-158281 Clientid:01:52:54:00:ea:42:b5}
	I0120 11:23:11.017402  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined IP address 192.168.39.113 and MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:23:11.017431  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHPort
	I0120 11:23:11.017586  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHKeyPath
	I0120 11:23:11.017733  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHUsername
	I0120 11:23:11.017803  950344 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:23:11.017848  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:23:11.017866  950344 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/addons-158281/id_rsa Username:docker}
	I0120 11:23:11.018144  950344 main.go:141] libmachine: (addons-158281) Calling .GetState
	I0120 11:23:11.018273  950344 main.go:141] libmachine: (addons-158281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:42:b5", ip: ""} in network mk-addons-158281: {Iface:virbr1 ExpiryTime:2025-01-20 12:22:41 +0000 UTC Type:0 Mac:52:54:00:ea:42:b5 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-158281 Clientid:01:52:54:00:ea:42:b5}
	I0120 11:23:11.018295  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined IP address 192.168.39.113 and MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:23:11.018510  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHPort
	I0120 11:23:11.018668  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHKeyPath
	I0120 11:23:11.018714  950344 main.go:141] libmachine: (addons-158281) Calling .DriverName
	I0120 11:23:11.018949  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHUsername
	I0120 11:23:11.019158  950344 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/addons-158281/id_rsa Username:docker}
	I0120 11:23:11.019667  950344 main.go:141] libmachine: (addons-158281) Calling .DriverName
	I0120 11:23:11.019818  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:23:11.020257  950344 main.go:141] libmachine: (addons-158281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:42:b5", ip: ""} in network mk-addons-158281: {Iface:virbr1 ExpiryTime:2025-01-20 12:22:41 +0000 UTC Type:0 Mac:52:54:00:ea:42:b5 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-158281 Clientid:01:52:54:00:ea:42:b5}
	I0120 11:23:11.020277  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined IP address 192.168.39.113 and MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:23:11.020464  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHPort
	I0120 11:23:11.020633  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHKeyPath
	I0120 11:23:11.020802  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHUsername
	I0120 11:23:11.020967  950344 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/addons-158281/id_rsa Username:docker}
	I0120 11:23:11.021221  950344 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0120 11:23:11.021272  950344 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.28
	I0120 11:23:11.022771  950344 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0120 11:23:11.022792  950344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0120 11:23:11.022809  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHHostname
	I0120 11:23:11.024046  950344 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0120 11:23:11.025188  950344 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0120 11:23:11.025642  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:23:11.026048  950344 main.go:141] libmachine: (addons-158281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:42:b5", ip: ""} in network mk-addons-158281: {Iface:virbr1 ExpiryTime:2025-01-20 12:22:41 +0000 UTC Type:0 Mac:52:54:00:ea:42:b5 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-158281 Clientid:01:52:54:00:ea:42:b5}
	I0120 11:23:11.026078  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined IP address 192.168.39.113 and MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:23:11.026289  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHPort
	I0120 11:23:11.026479  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHKeyPath
	I0120 11:23:11.026631  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHUsername
	I0120 11:23:11.026769  950344 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/addons-158281/id_rsa Username:docker}
	I0120 11:23:11.027866  950344 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0120 11:23:11.029039  950344 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0120 11:23:11.030270  950344 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0120 11:23:11.031287  950344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43935
	I0120 11:23:11.031312  950344 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0120 11:23:11.031692  950344 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:23:11.032247  950344 main.go:141] libmachine: Using API Version  1
	I0120 11:23:11.032271  950344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:23:11.032664  950344 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:23:11.032899  950344 main.go:141] libmachine: (addons-158281) Calling .GetState
	I0120 11:23:11.033222  950344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41419
	I0120 11:23:11.033439  950344 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0120 11:23:11.033622  950344 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:23:11.034284  950344 main.go:141] libmachine: Using API Version  1
	I0120 11:23:11.034309  950344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:23:11.034424  950344 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0120 11:23:11.034444  950344 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0120 11:23:11.034475  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHHostname
	I0120 11:23:11.034816  950344 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:23:11.034850  950344 main.go:141] libmachine: (addons-158281) Calling .DriverName
	I0120 11:23:11.035012  950344 main.go:141] libmachine: (addons-158281) Calling .GetState
	I0120 11:23:11.036507  950344 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.36.0
	I0120 11:23:11.036907  950344 main.go:141] libmachine: (addons-158281) Calling .DriverName
	I0120 11:23:11.038111  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:23:11.038125  950344 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0120 11:23:11.038140  950344 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0120 11:23:11.038159  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHHostname
	I0120 11:23:11.038198  950344 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0120 11:23:11.039141  950344 main.go:141] libmachine: (addons-158281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:42:b5", ip: ""} in network mk-addons-158281: {Iface:virbr1 ExpiryTime:2025-01-20 12:22:41 +0000 UTC Type:0 Mac:52:54:00:ea:42:b5 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-158281 Clientid:01:52:54:00:ea:42:b5}
	I0120 11:23:11.039175  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined IP address 192.168.39.113 and MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:23:11.039437  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHPort
	I0120 11:23:11.039618  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHKeyPath
	I0120 11:23:11.039876  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHUsername
	I0120 11:23:11.040022  950344 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/addons-158281/id_rsa Username:docker}
	I0120 11:23:11.040114  950344 out.go:177]   - Using image docker.io/busybox:stable
	I0120 11:23:11.041375  950344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34367
	I0120 11:23:11.041411  950344 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0120 11:23:11.041426  950344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0120 11:23:11.041443  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHHostname
	I0120 11:23:11.041741  950344 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:23:11.042158  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:23:11.042588  950344 main.go:141] libmachine: (addons-158281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:42:b5", ip: ""} in network mk-addons-158281: {Iface:virbr1 ExpiryTime:2025-01-20 12:22:41 +0000 UTC Type:0 Mac:52:54:00:ea:42:b5 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-158281 Clientid:01:52:54:00:ea:42:b5}
	I0120 11:23:11.042608  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined IP address 192.168.39.113 and MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:23:11.042732  950344 main.go:141] libmachine: Using API Version  1
	I0120 11:23:11.042744  950344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:23:11.042991  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHPort
	I0120 11:23:11.043183  950344 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:23:11.043210  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHKeyPath
	I0120 11:23:11.043330  950344 main.go:141] libmachine: (addons-158281) Calling .GetState
	I0120 11:23:11.043399  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHUsername
	I0120 11:23:11.043529  950344 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/addons-158281/id_rsa Username:docker}
	I0120 11:23:11.044700  950344 main.go:141] libmachine: (addons-158281) Calling .DriverName
	I0120 11:23:11.044767  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:23:11.045023  950344 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 11:23:11.045036  950344 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 11:23:11.045050  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHHostname
	I0120 11:23:11.045118  950344 main.go:141] libmachine: (addons-158281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:42:b5", ip: ""} in network mk-addons-158281: {Iface:virbr1 ExpiryTime:2025-01-20 12:22:41 +0000 UTC Type:0 Mac:52:54:00:ea:42:b5 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-158281 Clientid:01:52:54:00:ea:42:b5}
	I0120 11:23:11.045134  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined IP address 192.168.39.113 and MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:23:11.045316  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHPort
	I0120 11:23:11.045465  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHKeyPath
	I0120 11:23:11.045577  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHUsername
	I0120 11:23:11.045694  950344 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/addons-158281/id_rsa Username:docker}
	I0120 11:23:11.047962  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:23:11.048313  950344 main.go:141] libmachine: (addons-158281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:42:b5", ip: ""} in network mk-addons-158281: {Iface:virbr1 ExpiryTime:2025-01-20 12:22:41 +0000 UTC Type:0 Mac:52:54:00:ea:42:b5 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-158281 Clientid:01:52:54:00:ea:42:b5}
	I0120 11:23:11.048338  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined IP address 192.168.39.113 and MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:23:11.048468  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHPort
	I0120 11:23:11.048636  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHKeyPath
	I0120 11:23:11.048773  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHUsername
	I0120 11:23:11.048925  950344 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/addons-158281/id_rsa Username:docker}
	I0120 11:23:11.268547  950344 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 11:23:11.268564  950344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0120 11:23:11.285809  950344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0120 11:23:11.299524  950344 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0120 11:23:11.299543  950344 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0120 11:23:11.326528  950344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0120 11:23:11.361559  950344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 11:23:11.377705  950344 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0120 11:23:11.377745  950344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0120 11:23:11.415158  950344 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 11:23:11.415190  950344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0120 11:23:11.429841  950344 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0120 11:23:11.429867  950344 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0120 11:23:11.440130  950344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0120 11:23:11.441396  950344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 11:23:11.454003  950344 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0120 11:23:11.454024  950344 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0120 11:23:11.460065  950344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0120 11:23:11.476027  950344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0120 11:23:11.488071  950344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0120 11:23:11.498837  950344 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0120 11:23:11.498864  950344 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0120 11:23:11.500416  950344 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0120 11:23:11.500431  950344 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0120 11:23:11.525544  950344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0120 11:23:11.563347  950344 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 11:23:11.563378  950344 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 11:23:11.598079  950344 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0120 11:23:11.598112  950344 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0120 11:23:11.632553  950344 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0120 11:23:11.632586  950344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0120 11:23:11.634440  950344 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0120 11:23:11.634466  950344 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0120 11:23:11.644344  950344 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0120 11:23:11.644360  950344 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0120 11:23:11.715969  950344 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 11:23:11.716005  950344 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 11:23:11.755809  950344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0120 11:23:11.767661  950344 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0120 11:23:11.767690  950344 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0120 11:23:11.805827  950344 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0120 11:23:11.805854  950344 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0120 11:23:11.825894  950344 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0120 11:23:11.825933  950344 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0120 11:23:11.868562  950344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 11:23:11.900471  950344 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0120 11:23:11.900503  950344 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0120 11:23:11.968391  950344 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0120 11:23:11.968418  950344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0120 11:23:11.976807  950344 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0120 11:23:11.976833  950344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0120 11:23:12.103122  950344 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0120 11:23:12.103152  950344 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0120 11:23:12.152419  950344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0120 11:23:12.309784  950344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0120 11:23:12.374599  950344 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0120 11:23:12.374632  950344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0120 11:23:12.666655  950344 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0120 11:23:12.666689  950344 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0120 11:23:12.897754  950344 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0120 11:23:12.897789  950344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0120 11:23:12.957667  950344 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0120 11:23:12.957697  950344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0120 11:23:13.249250  950344 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0120 11:23:13.249288  950344 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0120 11:23:13.371415  950344 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.102825893s)
	I0120 11:23:13.372438  950344 node_ready.go:35] waiting up to 6m0s for node "addons-158281" to be "Ready" ...
	I0120 11:23:13.372574  950344 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.103974769s)
	I0120 11:23:13.372622  950344 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
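A note on the step above, sketched rather than taken from this run: the sed pipeline rewrites the live coredns ConfigMap so in-cluster DNS resolves host.minikube.internal to the host-side address 192.168.39.1. Assuming the addons-158281 context from this run is reachable from the host, the injected block could be inspected with:

    # sketch only: show the hosts stanza added to the CoreDNS Corefile
    kubectl --context addons-158281 -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'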
	I0120 11:23:13.377570  950344 node_ready.go:49] node "addons-158281" has status "Ready":"True"
	I0120 11:23:13.377592  950344 node_ready.go:38] duration metric: took 5.126049ms for node "addons-158281" to be "Ready" ...
	I0120 11:23:13.377606  950344 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 11:23:13.397675  950344 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-xcklk" in "kube-system" namespace to be "Ready" ...
	I0120 11:23:13.514237  950344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0120 11:23:13.879741  950344 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-158281" context rescaled to 1 replicas
	I0120 11:23:14.485615  950344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.199755418s)
	I0120 11:23:14.485693  950344 main.go:141] libmachine: Making call to close driver server
	I0120 11:23:14.485707  950344 main.go:141] libmachine: (addons-158281) Calling .Close
	I0120 11:23:14.486042  950344 main.go:141] libmachine: (addons-158281) DBG | Closing plugin on server side
	I0120 11:23:14.486133  950344 main.go:141] libmachine: Successfully made call to close driver server
	I0120 11:23:14.486150  950344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 11:23:14.486165  950344 main.go:141] libmachine: Making call to close driver server
	I0120 11:23:14.486175  950344 main.go:141] libmachine: (addons-158281) Calling .Close
	I0120 11:23:14.486412  950344 main.go:141] libmachine: Successfully made call to close driver server
	I0120 11:23:14.486431  950344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 11:23:14.486442  950344 main.go:141] libmachine: (addons-158281) DBG | Closing plugin on server side
	I0120 11:23:15.418675  950344 pod_ready.go:103] pod "amd-gpu-device-plugin-xcklk" in "kube-system" namespace has status "Ready":"False"
	I0120 11:23:17.462719  950344 pod_ready.go:103] pod "amd-gpu-device-plugin-xcklk" in "kube-system" namespace has status "Ready":"False"
	I0120 11:23:17.787710  950344 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0120 11:23:17.787760  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHHostname
	I0120 11:23:17.791031  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:23:17.791545  950344 main.go:141] libmachine: (addons-158281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:42:b5", ip: ""} in network mk-addons-158281: {Iface:virbr1 ExpiryTime:2025-01-20 12:22:41 +0000 UTC Type:0 Mac:52:54:00:ea:42:b5 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-158281 Clientid:01:52:54:00:ea:42:b5}
	I0120 11:23:17.791581  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined IP address 192.168.39.113 and MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:23:17.791823  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHPort
	I0120 11:23:17.792121  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHKeyPath
	I0120 11:23:17.792342  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHUsername
	I0120 11:23:17.792590  950344 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/addons-158281/id_rsa Username:docker}
	I0120 11:23:18.288126  950344 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0120 11:23:18.446157  950344 addons.go:238] Setting addon gcp-auth=true in "addons-158281"
	I0120 11:23:18.446217  950344 host.go:66] Checking if "addons-158281" exists ...
	I0120 11:23:18.446557  950344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:23:18.446607  950344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:23:18.463073  950344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43119
	I0120 11:23:18.463711  950344 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:23:18.464372  950344 main.go:141] libmachine: Using API Version  1
	I0120 11:23:18.464402  950344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:23:18.464823  950344 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:23:18.465399  950344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:23:18.465445  950344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:23:18.481130  950344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40695
	I0120 11:23:18.481673  950344 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:23:18.482165  950344 main.go:141] libmachine: Using API Version  1
	I0120 11:23:18.482185  950344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:23:18.482595  950344 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:23:18.482808  950344 main.go:141] libmachine: (addons-158281) Calling .GetState
	I0120 11:23:18.484602  950344 main.go:141] libmachine: (addons-158281) Calling .DriverName
	I0120 11:23:18.484829  950344 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0120 11:23:18.484858  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHHostname
	I0120 11:23:18.487561  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:23:18.487945  950344 main.go:141] libmachine: (addons-158281) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:42:b5", ip: ""} in network mk-addons-158281: {Iface:virbr1 ExpiryTime:2025-01-20 12:22:41 +0000 UTC Type:0 Mac:52:54:00:ea:42:b5 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-158281 Clientid:01:52:54:00:ea:42:b5}
	I0120 11:23:18.487972  950344 main.go:141] libmachine: (addons-158281) DBG | domain addons-158281 has defined IP address 192.168.39.113 and MAC address 52:54:00:ea:42:b5 in network mk-addons-158281
	I0120 11:23:18.488199  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHPort
	I0120 11:23:18.488393  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHKeyPath
	I0120 11:23:18.488579  950344 main.go:141] libmachine: (addons-158281) Calling .GetSSHUsername
	I0120 11:23:18.488739  950344 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/addons-158281/id_rsa Username:docker}
	I0120 11:23:18.613682  950344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.287100141s)
	I0120 11:23:18.613746  950344 main.go:141] libmachine: Making call to close driver server
	I0120 11:23:18.613760  950344 main.go:141] libmachine: (addons-158281) Calling .Close
	I0120 11:23:18.613783  950344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.252189148s)
	I0120 11:23:18.613830  950344 main.go:141] libmachine: Making call to close driver server
	I0120 11:23:18.613846  950344 main.go:141] libmachine: (addons-158281) Calling .Close
	I0120 11:23:18.613884  950344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.173726676s)
	I0120 11:23:18.613907  950344 main.go:141] libmachine: Making call to close driver server
	I0120 11:23:18.613916  950344 main.go:141] libmachine: (addons-158281) Calling .Close
	I0120 11:23:18.613962  950344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.172540055s)
	I0120 11:23:18.613989  950344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.153906513s)
	I0120 11:23:18.614007  950344 main.go:141] libmachine: Making call to close driver server
	I0120 11:23:18.614020  950344 main.go:141] libmachine: (addons-158281) Calling .Close
	I0120 11:23:18.614009  950344 main.go:141] libmachine: Making call to close driver server
	I0120 11:23:18.614103  950344 main.go:141] libmachine: (addons-158281) Calling .Close
	I0120 11:23:18.614121  950344 main.go:141] libmachine: Successfully made call to close driver server
	I0120 11:23:18.614128  950344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (7.1260202s)
	I0120 11:23:18.614082  950344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.13803531s)
	I0120 11:23:18.614135  950344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 11:23:18.614147  950344 main.go:141] libmachine: Making call to close driver server
	I0120 11:23:18.614148  950344 main.go:141] libmachine: Making call to close driver server
	I0120 11:23:18.614153  950344 main.go:141] libmachine: Making call to close driver server
	I0120 11:23:18.614157  950344 main.go:141] libmachine: (addons-158281) Calling .Close
	I0120 11:23:18.614163  950344 main.go:141] libmachine: (addons-158281) Calling .Close
	I0120 11:23:18.614155  950344 main.go:141] libmachine: (addons-158281) Calling .Close
	I0120 11:23:18.614194  950344 main.go:141] libmachine: Successfully made call to close driver server
	I0120 11:23:18.614205  950344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 11:23:18.614214  950344 main.go:141] libmachine: Making call to close driver server
	I0120 11:23:18.614222  950344 main.go:141] libmachine: (addons-158281) Calling .Close
	I0120 11:23:18.614260  950344 main.go:141] libmachine: (addons-158281) DBG | Closing plugin on server side
	I0120 11:23:18.614270  950344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.088702444s)
	I0120 11:23:18.614291  950344 main.go:141] libmachine: Making call to close driver server
	I0120 11:23:18.614296  950344 main.go:141] libmachine: Successfully made call to close driver server
	I0120 11:23:18.614313  950344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 11:23:18.614319  950344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.858479889s)
	I0120 11:23:18.614324  950344 main.go:141] libmachine: Making call to close driver server
	I0120 11:23:18.614328  950344 main.go:141] libmachine: Successfully made call to close driver server
	I0120 11:23:18.614339  950344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 11:23:18.614346  950344 main.go:141] libmachine: Making call to close driver server
	I0120 11:23:18.614300  950344 main.go:141] libmachine: (addons-158281) Calling .Close
	I0120 11:23:18.614356  950344 main.go:141] libmachine: (addons-158281) Calling .Close
	I0120 11:23:18.614347  950344 main.go:141] libmachine: Making call to close driver server
	I0120 11:23:18.614403  950344 main.go:141] libmachine: (addons-158281) Calling .Close
	I0120 11:23:18.614345  950344 main.go:141] libmachine: (addons-158281) Calling .Close
	I0120 11:23:18.614502  950344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.745903697s)
	I0120 11:23:18.614546  950344 main.go:141] libmachine: Making call to close driver server
	I0120 11:23:18.614554  950344 main.go:141] libmachine: (addons-158281) Calling .Close
	I0120 11:23:18.614651  950344 main.go:141] libmachine: (addons-158281) DBG | Closing plugin on server side
	I0120 11:23:18.614674  950344 main.go:141] libmachine: (addons-158281) DBG | Closing plugin on server side
	I0120 11:23:18.614702  950344 main.go:141] libmachine: Successfully made call to close driver server
	I0120 11:23:18.614713  950344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 11:23:18.614731  950344 main.go:141] libmachine: Making call to close driver server
	I0120 11:23:18.614741  950344 main.go:141] libmachine: (addons-158281) Calling .Close
	I0120 11:23:18.614762  950344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.304945007s)
	I0120 11:23:18.614783  950344 main.go:141] libmachine: Making call to close driver server
	I0120 11:23:18.614794  950344 main.go:141] libmachine: (addons-158281) Calling .Close
	I0120 11:23:18.614796  950344 main.go:141] libmachine: (addons-158281) DBG | Closing plugin on server side
	I0120 11:23:18.614821  950344 main.go:141] libmachine: Successfully made call to close driver server
	I0120 11:23:18.614831  950344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 11:23:18.614839  950344 main.go:141] libmachine: Making call to close driver server
	I0120 11:23:18.614846  950344 main.go:141] libmachine: (addons-158281) Calling .Close
	I0120 11:23:18.614880  950344 main.go:141] libmachine: (addons-158281) DBG | Closing plugin on server side
	I0120 11:23:18.614899  950344 main.go:141] libmachine: Successfully made call to close driver server
	I0120 11:23:18.614906  950344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 11:23:18.614907  950344 main.go:141] libmachine: Successfully made call to close driver server
	I0120 11:23:18.614915  950344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 11:23:18.614917  950344 addons.go:479] Verifying addon ingress=true in "addons-158281"
	I0120 11:23:18.615125  950344 main.go:141] libmachine: (addons-158281) DBG | Closing plugin on server side
	I0120 11:23:18.615154  950344 main.go:141] libmachine: Successfully made call to close driver server
	I0120 11:23:18.615161  950344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 11:23:18.615168  950344 main.go:141] libmachine: Making call to close driver server
	I0120 11:23:18.615175  950344 main.go:141] libmachine: (addons-158281) Calling .Close
	I0120 11:23:18.616272  950344 out.go:177] * Verifying ingress addon...
	I0120 11:23:18.618429  950344 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0120 11:23:18.618711  950344 main.go:141] libmachine: (addons-158281) DBG | Closing plugin on server side
	I0120 11:23:18.618742  950344 main.go:141] libmachine: Successfully made call to close driver server
	I0120 11:23:18.618749  950344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 11:23:18.618757  950344 main.go:141] libmachine: Making call to close driver server
	I0120 11:23:18.618763  950344 main.go:141] libmachine: (addons-158281) Calling .Close
	I0120 11:23:18.614703  950344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.462245916s)
	I0120 11:23:18.618812  950344 main.go:141] libmachine: (addons-158281) DBG | Closing plugin on server side
	W0120 11:23:18.618839  950344 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0120 11:23:18.618856  950344 main.go:141] libmachine: Successfully made call to close driver server
	I0120 11:23:18.618864  950344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 11:23:18.618865  950344 retry.go:31] will retry after 255.869194ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
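The failure and retry above come from applying a VolumeSnapshotClass in the same kubectl apply batch as the CRD that defines it: the CRD is not yet established when the custom resource is validated, hence "no matches for kind VolumeSnapshotClass". The log resolves this shortly below by re-applying the same manifests with --force; an equivalent manual sequence, sketched here against the same manifest paths, is to apply the CRDs first, wait for them to be established, then apply the snapshot class:

    # sketch only: apply snapshot CRDs, wait until established, then the class
    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
                  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
                  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
    kubectl wait --for=condition=Established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml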
	I0120 11:23:18.619050  950344 main.go:141] libmachine: (addons-158281) DBG | Closing plugin on server side
	I0120 11:23:18.619089  950344 main.go:141] libmachine: Successfully made call to close driver server
	I0120 11:23:18.619104  950344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 11:23:18.619278  950344 main.go:141] libmachine: (addons-158281) DBG | Closing plugin on server side
	I0120 11:23:18.619316  950344 main.go:141] libmachine: Successfully made call to close driver server
	I0120 11:23:18.619325  950344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 11:23:18.619545  950344 main.go:141] libmachine: (addons-158281) DBG | Closing plugin on server side
	I0120 11:23:18.619571  950344 main.go:141] libmachine: Successfully made call to close driver server
	I0120 11:23:18.619577  950344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 11:23:18.619614  950344 main.go:141] libmachine: (addons-158281) DBG | Closing plugin on server side
	I0120 11:23:18.619643  950344 main.go:141] libmachine: Successfully made call to close driver server
	I0120 11:23:18.619650  950344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 11:23:18.619657  950344 main.go:141] libmachine: Making call to close driver server
	I0120 11:23:18.619663  950344 main.go:141] libmachine: (addons-158281) Calling .Close
	I0120 11:23:18.619734  950344 main.go:141] libmachine: (addons-158281) DBG | Closing plugin on server side
	I0120 11:23:18.619747  950344 main.go:141] libmachine: (addons-158281) DBG | Closing plugin on server side
	I0120 11:23:18.619767  950344 main.go:141] libmachine: Successfully made call to close driver server
	I0120 11:23:18.619772  950344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 11:23:18.619779  950344 main.go:141] libmachine: Making call to close driver server
	I0120 11:23:18.619786  950344 main.go:141] libmachine: (addons-158281) Calling .Close
	I0120 11:23:18.619808  950344 main.go:141] libmachine: Successfully made call to close driver server
	I0120 11:23:18.619816  950344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 11:23:18.619832  950344 main.go:141] libmachine: (addons-158281) DBG | Closing plugin on server side
	I0120 11:23:18.619860  950344 main.go:141] libmachine: Successfully made call to close driver server
	I0120 11:23:18.619866  950344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 11:23:18.619873  950344 main.go:141] libmachine: Making call to close driver server
	I0120 11:23:18.619878  950344 main.go:141] libmachine: (addons-158281) Calling .Close
	I0120 11:23:18.620307  950344 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-158281 service yakd-dashboard -n yakd-dashboard
	
	I0120 11:23:18.622914  950344 main.go:141] libmachine: (addons-158281) DBG | Closing plugin on server side
	I0120 11:23:18.622943  950344 main.go:141] libmachine: Successfully made call to close driver server
	I0120 11:23:18.622955  950344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 11:23:18.622956  950344 main.go:141] libmachine: Successfully made call to close driver server
	I0120 11:23:18.622965  950344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 11:23:18.622975  950344 addons.go:479] Verifying addon metrics-server=true in "addons-158281"
	I0120 11:23:18.623013  950344 main.go:141] libmachine: (addons-158281) DBG | Closing plugin on server side
	I0120 11:23:18.623022  950344 main.go:141] libmachine: Successfully made call to close driver server
	I0120 11:23:18.623030  950344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 11:23:18.623036  950344 addons.go:479] Verifying addon registry=true in "addons-158281"
	I0120 11:23:18.623404  950344 main.go:141] libmachine: (addons-158281) DBG | Closing plugin on server side
	I0120 11:23:18.623465  950344 main.go:141] libmachine: Successfully made call to close driver server
	I0120 11:23:18.623482  950344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 11:23:18.624754  950344 out.go:177] * Verifying registry addon...
	I0120 11:23:18.629474  950344 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0120 11:23:18.632191  950344 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0120 11:23:18.632210  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:18.641099  950344 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0120 11:23:18.641116  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:18.652410  950344 main.go:141] libmachine: Making call to close driver server
	I0120 11:23:18.652432  950344 main.go:141] libmachine: (addons-158281) Calling .Close
	I0120 11:23:18.652776  950344 main.go:141] libmachine: Successfully made call to close driver server
	I0120 11:23:18.652794  950344 main.go:141] libmachine: Making call to close connection to plugin binary
	W0120 11:23:18.652962  950344 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
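The warning above is Kubernetes' optimistic-concurrency check: the StorageClass was modified between read and update, so the attempt to mark local-path as default was rejected rather than silently overwritten. Re-issuing the change against the latest object version normally succeeds; a sketch using the standard default-class annotation (storage class name taken from the error message):

    # sketch only: mark the local-path StorageClass as the default
    kubectl patch storageclass local-path \
      -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'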
	I0120 11:23:18.657807  950344 main.go:141] libmachine: Making call to close driver server
	I0120 11:23:18.657833  950344 main.go:141] libmachine: (addons-158281) Calling .Close
	I0120 11:23:18.658191  950344 main.go:141] libmachine: Successfully made call to close driver server
	I0120 11:23:18.658214  950344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 11:23:18.658229  950344 main.go:141] libmachine: (addons-158281) DBG | Closing plugin on server side
	I0120 11:23:18.875531  950344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0120 11:23:19.125072  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:19.133143  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:19.635560  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:19.644118  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:19.920321  950344 pod_ready.go:103] pod "amd-gpu-device-plugin-xcklk" in "kube-system" namespace has status "Ready":"False"
	I0120 11:23:20.180806  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:20.181054  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:20.182091  950344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.667814911s)
	I0120 11:23:20.182141  950344 main.go:141] libmachine: Making call to close driver server
	I0120 11:23:20.182161  950344 main.go:141] libmachine: (addons-158281) Calling .Close
	I0120 11:23:20.182161  950344 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.697308105s)
	I0120 11:23:20.182431  950344 main.go:141] libmachine: Successfully made call to close driver server
	I0120 11:23:20.182444  950344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 11:23:20.182454  950344 main.go:141] libmachine: Making call to close driver server
	I0120 11:23:20.182460  950344 main.go:141] libmachine: (addons-158281) Calling .Close
	I0120 11:23:20.182721  950344 main.go:141] libmachine: Successfully made call to close driver server
	I0120 11:23:20.182727  950344 main.go:141] libmachine: (addons-158281) DBG | Closing plugin on server side
	I0120 11:23:20.182739  950344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 11:23:20.182750  950344 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-158281"
	I0120 11:23:20.183827  950344 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0120 11:23:20.184603  950344 out.go:177] * Verifying csi-hostpath-driver addon...
	I0120 11:23:20.186031  950344 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0120 11:23:20.186850  950344 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0120 11:23:20.187089  950344 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0120 11:23:20.187107  950344 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0120 11:23:20.235720  950344 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0120 11:23:20.235752  950344 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0120 11:23:20.271050  950344 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0120 11:23:20.271083  950344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0120 11:23:20.272201  950344 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0120 11:23:20.272220  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:20.381094  950344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0120 11:23:20.595208  950344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.719607931s)
	I0120 11:23:20.595300  950344 main.go:141] libmachine: Making call to close driver server
	I0120 11:23:20.595321  950344 main.go:141] libmachine: (addons-158281) Calling .Close
	I0120 11:23:20.595622  950344 main.go:141] libmachine: (addons-158281) DBG | Closing plugin on server side
	I0120 11:23:20.595664  950344 main.go:141] libmachine: Successfully made call to close driver server
	I0120 11:23:20.595673  950344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 11:23:20.595689  950344 main.go:141] libmachine: Making call to close driver server
	I0120 11:23:20.595698  950344 main.go:141] libmachine: (addons-158281) Calling .Close
	I0120 11:23:20.596057  950344 main.go:141] libmachine: Successfully made call to close driver server
	I0120 11:23:20.596090  950344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 11:23:20.596111  950344 main.go:141] libmachine: (addons-158281) DBG | Closing plugin on server side
	I0120 11:23:20.624164  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:20.633416  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:20.692332  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:21.216837  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:21.216862  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:21.217109  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:21.468013  950344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.086864195s)
	I0120 11:23:21.468122  950344 main.go:141] libmachine: Making call to close driver server
	I0120 11:23:21.468146  950344 main.go:141] libmachine: (addons-158281) Calling .Close
	I0120 11:23:21.468558  950344 main.go:141] libmachine: Successfully made call to close driver server
	I0120 11:23:21.468560  950344 main.go:141] libmachine: (addons-158281) DBG | Closing plugin on server side
	I0120 11:23:21.468576  950344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 11:23:21.468592  950344 main.go:141] libmachine: Making call to close driver server
	I0120 11:23:21.468601  950344 main.go:141] libmachine: (addons-158281) Calling .Close
	I0120 11:23:21.468833  950344 main.go:141] libmachine: Successfully made call to close driver server
	I0120 11:23:21.468862  950344 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 11:23:21.468863  950344 main.go:141] libmachine: (addons-158281) DBG | Closing plugin on server side
	I0120 11:23:21.469841  950344 addons.go:479] Verifying addon gcp-auth=true in "addons-158281"
	I0120 11:23:21.471672  950344 out.go:177] * Verifying gcp-auth addon...
	I0120 11:23:21.473732  950344 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0120 11:23:21.484600  950344 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0120 11:23:21.484617  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:21.624297  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:21.632249  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:21.692328  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:21.977903  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:22.122379  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:22.133207  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:22.191881  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:22.410319  950344 pod_ready.go:103] pod "amd-gpu-device-plugin-xcklk" in "kube-system" namespace has status "Ready":"False"
	I0120 11:23:22.477479  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:22.622969  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:22.633109  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:22.690929  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:22.977072  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:23.122512  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:23.132883  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:23.191272  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:23.478109  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:23.622690  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:23.632797  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:23.691326  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:23.978033  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:24.122946  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:24.133174  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:24.191688  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:24.477206  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:24.622748  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:24.632684  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:24.691690  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:24.903505  950344 pod_ready.go:103] pod "amd-gpu-device-plugin-xcklk" in "kube-system" namespace has status "Ready":"False"
	I0120 11:23:24.977351  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:25.123629  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:25.136203  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:25.191936  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:25.499208  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:25.622358  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:25.632287  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:25.690733  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:25.977724  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:26.123069  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:26.133746  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:26.190918  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:26.478018  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:26.624573  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:26.633142  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:26.691387  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:26.905530  950344 pod_ready.go:103] pod "amd-gpu-device-plugin-xcklk" in "kube-system" namespace has status "Ready":"False"
	I0120 11:23:26.985822  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:27.123062  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:27.132724  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:27.190574  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:27.476617  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:27.622802  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:27.632719  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:27.690885  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:27.977200  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:28.122751  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:28.132776  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:28.190901  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:28.476427  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:28.622861  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:28.632808  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:28.690994  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:28.976787  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:29.122036  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:29.133184  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:29.191248  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:29.403471  950344 pod_ready.go:103] pod "amd-gpu-device-plugin-xcklk" in "kube-system" namespace has status "Ready":"False"
	I0120 11:23:29.477561  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:29.623803  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:29.632809  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:29.691384  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:29.978098  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:30.123275  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:30.133563  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:30.192805  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:30.477424  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:30.623637  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:30.633282  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:30.692086  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:30.978128  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:31.123341  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:31.133355  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:31.193795  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:31.477821  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:31.623878  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:31.632372  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:31.691264  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:31.902773  950344 pod_ready.go:103] pod "amd-gpu-device-plugin-xcklk" in "kube-system" namespace has status "Ready":"False"
	I0120 11:23:31.977608  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:32.122851  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:32.133176  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:32.191677  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:32.477372  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:32.623461  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:32.632573  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:32.691524  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:32.977170  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:33.122620  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:33.132646  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:33.190864  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:33.477002  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:33.622180  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:33.633405  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:33.690617  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:33.903513  950344 pod_ready.go:103] pod "amd-gpu-device-plugin-xcklk" in "kube-system" namespace has status "Ready":"False"
	I0120 11:23:33.977014  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:34.122677  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:34.132983  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:34.191226  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:34.477653  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:34.623533  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:34.633820  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:34.693500  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:34.976633  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:35.123334  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:35.132977  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:35.191036  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:35.476913  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:35.622323  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:35.632478  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:35.690815  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:36.182480  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:36.184664  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:36.185037  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:36.186568  950344 pod_ready.go:103] pod "amd-gpu-device-plugin-xcklk" in "kube-system" namespace has status "Ready":"False"
	I0120 11:23:36.191065  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:36.476755  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:36.622281  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:36.632254  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:36.690401  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:36.977045  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:37.126758  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:37.132882  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:37.191587  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:37.710144  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:37.711312  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:37.711479  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:37.711926  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:37.977701  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:38.122120  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:38.133413  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:38.191092  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:38.403732  950344 pod_ready.go:103] pod "amd-gpu-device-plugin-xcklk" in "kube-system" namespace has status "Ready":"False"
	I0120 11:23:38.477205  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:38.623198  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:38.632765  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:38.691232  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:38.977711  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:39.123150  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:39.133140  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:39.191502  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:39.478136  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:39.622624  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:39.632983  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:39.691866  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:39.976479  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:40.126897  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:40.132782  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:40.190539  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:40.477125  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:40.622129  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:40.633132  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:40.691573  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:40.903657  950344 pod_ready.go:103] pod "amd-gpu-device-plugin-xcklk" in "kube-system" namespace has status "Ready":"False"
	I0120 11:23:40.977112  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:41.122918  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:41.132994  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:41.191130  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:41.481257  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:41.622452  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:41.632158  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:41.691702  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:41.976939  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:42.122876  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:42.132692  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:42.190854  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:42.477821  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:42.622571  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:42.633206  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:42.691295  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:42.903701  950344 pod_ready.go:103] pod "amd-gpu-device-plugin-xcklk" in "kube-system" namespace has status "Ready":"False"
	I0120 11:23:42.977322  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:43.122510  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:43.133595  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:43.190665  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:43.476409  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:43.622910  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:43.633470  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:43.690179  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:43.905705  950344 pod_ready.go:93] pod "amd-gpu-device-plugin-xcklk" in "kube-system" namespace has status "Ready":"True"
	I0120 11:23:43.905728  950344 pod_ready.go:82] duration metric: took 30.508020907s for pod "amd-gpu-device-plugin-xcklk" in "kube-system" namespace to be "Ready" ...
	I0120 11:23:43.905737  950344 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-5zsqd" in "kube-system" namespace to be "Ready" ...
	I0120 11:23:43.911425  950344 pod_ready.go:93] pod "coredns-668d6bf9bc-5zsqd" in "kube-system" namespace has status "Ready":"True"
	I0120 11:23:43.911453  950344 pod_ready.go:82] duration metric: took 5.708865ms for pod "coredns-668d6bf9bc-5zsqd" in "kube-system" namespace to be "Ready" ...
	I0120 11:23:43.911465  950344 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-t6ccj" in "kube-system" namespace to be "Ready" ...
	I0120 11:23:43.913750  950344 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-t6ccj" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-t6ccj" not found
	I0120 11:23:43.913767  950344 pod_ready.go:82] duration metric: took 2.296511ms for pod "coredns-668d6bf9bc-t6ccj" in "kube-system" namespace to be "Ready" ...
	E0120 11:23:43.913776  950344 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-t6ccj" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-t6ccj" not found
	I0120 11:23:43.913783  950344 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-158281" in "kube-system" namespace to be "Ready" ...
	I0120 11:23:43.918827  950344 pod_ready.go:93] pod "etcd-addons-158281" in "kube-system" namespace has status "Ready":"True"
	I0120 11:23:43.918845  950344 pod_ready.go:82] duration metric: took 5.056471ms for pod "etcd-addons-158281" in "kube-system" namespace to be "Ready" ...
	I0120 11:23:43.918858  950344 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-158281" in "kube-system" namespace to be "Ready" ...
	I0120 11:23:43.922326  950344 pod_ready.go:93] pod "kube-apiserver-addons-158281" in "kube-system" namespace has status "Ready":"True"
	I0120 11:23:43.922356  950344 pod_ready.go:82] duration metric: took 3.489178ms for pod "kube-apiserver-addons-158281" in "kube-system" namespace to be "Ready" ...
	I0120 11:23:43.922367  950344 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-158281" in "kube-system" namespace to be "Ready" ...
	I0120 11:23:43.976175  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:44.101294  950344 pod_ready.go:93] pod "kube-controller-manager-addons-158281" in "kube-system" namespace has status "Ready":"True"
	I0120 11:23:44.101318  950344 pod_ready.go:82] duration metric: took 178.943706ms for pod "kube-controller-manager-addons-158281" in "kube-system" namespace to be "Ready" ...
	I0120 11:23:44.101328  950344 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8666g" in "kube-system" namespace to be "Ready" ...
	I0120 11:23:44.121938  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:44.132567  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:44.192288  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:44.477405  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:44.501270  950344 pod_ready.go:93] pod "kube-proxy-8666g" in "kube-system" namespace has status "Ready":"True"
	I0120 11:23:44.501297  950344 pod_ready.go:82] duration metric: took 399.961274ms for pod "kube-proxy-8666g" in "kube-system" namespace to be "Ready" ...
	I0120 11:23:44.501312  950344 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-158281" in "kube-system" namespace to be "Ready" ...
	I0120 11:23:44.622864  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:44.632953  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:44.692114  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:44.902692  950344 pod_ready.go:93] pod "kube-scheduler-addons-158281" in "kube-system" namespace has status "Ready":"True"
	I0120 11:23:44.902718  950344 pod_ready.go:82] duration metric: took 401.397813ms for pod "kube-scheduler-addons-158281" in "kube-system" namespace to be "Ready" ...
	I0120 11:23:44.902728  950344 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-qwbjn" in "kube-system" namespace to be "Ready" ...
	I0120 11:23:44.978148  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:45.123820  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:45.134141  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:45.192126  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:45.302314  950344 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-qwbjn" in "kube-system" namespace has status "Ready":"True"
	I0120 11:23:45.302349  950344 pod_ready.go:82] duration metric: took 399.608974ms for pod "nvidia-device-plugin-daemonset-qwbjn" in "kube-system" namespace to be "Ready" ...
	I0120 11:23:45.302363  950344 pod_ready.go:39] duration metric: took 31.924742927s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 11:23:45.302387  950344 api_server.go:52] waiting for apiserver process to appear ...
	I0120 11:23:45.302448  950344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 11:23:45.320120  950344 api_server.go:72] duration metric: took 34.448287838s to wait for apiserver process to appear ...
	I0120 11:23:45.320150  950344 api_server.go:88] waiting for apiserver healthz status ...
	I0120 11:23:45.320173  950344 api_server.go:253] Checking apiserver healthz at https://192.168.39.113:8443/healthz ...
	I0120 11:23:45.325069  950344 api_server.go:279] https://192.168.39.113:8443/healthz returned 200:
	ok
	I0120 11:23:45.325946  950344 api_server.go:141] control plane version: v1.32.0
	I0120 11:23:45.325974  950344 api_server.go:131] duration metric: took 5.817589ms to wait for apiserver health ...
	I0120 11:23:45.325983  950344 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 11:23:45.476983  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:45.508157  950344 system_pods.go:59] 18 kube-system pods found
	I0120 11:23:45.508186  950344 system_pods.go:61] "amd-gpu-device-plugin-xcklk" [f89f7dce-db84-44de-8f83-c1a9b81a5ac8] Running
	I0120 11:23:45.508191  950344 system_pods.go:61] "coredns-668d6bf9bc-5zsqd" [c72f4d6a-7287-441e-9918-8a4db07ca695] Running
	I0120 11:23:45.508198  950344 system_pods.go:61] "csi-hostpath-attacher-0" [83df8c8e-e2b2-42ae-9444-8eae3b349fbf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0120 11:23:45.508204  950344 system_pods.go:61] "csi-hostpath-resizer-0" [be4b0ad7-05a4-47c2-8407-a6fbd9f55a17] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0120 11:23:45.508211  950344 system_pods.go:61] "csi-hostpathplugin-wfjj7" [09928bb3-eae6-47bc-8f67-45cb2bda653a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0120 11:23:45.508215  950344 system_pods.go:61] "etcd-addons-158281" [41a6d1d2-5fe4-4847-b2bf-c739ca954421] Running
	I0120 11:23:45.508219  950344 system_pods.go:61] "kube-apiserver-addons-158281" [1ca6a3ef-2b85-4764-83c3-72b27d358e63] Running
	I0120 11:23:45.508222  950344 system_pods.go:61] "kube-controller-manager-addons-158281" [e85dfd36-cb37-4ab5-be6c-88f29e4f8227] Running
	I0120 11:23:45.508226  950344 system_pods.go:61] "kube-ingress-dns-minikube" [d31ebc36-32f0-4fde-bd54-6a98f1c9f971] Running
	I0120 11:23:45.508229  950344 system_pods.go:61] "kube-proxy-8666g" [c08ec316-b79d-4820-8586-73c10c289d0f] Running
	I0120 11:23:45.508232  950344 system_pods.go:61] "kube-scheduler-addons-158281" [d77e34be-20f9-41aa-be5e-e981abe1f2f4] Running
	I0120 11:23:45.508237  950344 system_pods.go:61] "metrics-server-7fbb699795-kg4cd" [47516766-d1e7-492c-b56e-4ee032ec8b3f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 11:23:45.508242  950344 system_pods.go:61] "nvidia-device-plugin-daemonset-qwbjn" [22f8389b-4a08-44b4-8bf5-4052d2b93153] Running
	I0120 11:23:45.508247  950344 system_pods.go:61] "registry-6c86875c6f-hrzzv" [429b7809-2f4f-4e55-af7f-3ecbbf87557d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0120 11:23:45.508252  950344 system_pods.go:61] "registry-proxy-whl4v" [bdd62f98-726c-40c6-a3c6-fa45328ca334] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0120 11:23:45.508259  950344 system_pods.go:61] "snapshot-controller-68b874b76f-mvzbt" [946601d8-93c2-4dad-9121-12b02f2d86aa] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0120 11:23:45.508267  950344 system_pods.go:61] "snapshot-controller-68b874b76f-n5k9g" [bc76a6b4-edc2-4120-95c7-c1752c6fb852] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0120 11:23:45.508271  950344 system_pods.go:61] "storage-provisioner" [18b4f3d7-4d3f-45a1-8a8b-85631889c59a] Running
	I0120 11:23:45.508278  950344 system_pods.go:74] duration metric: took 182.288856ms to wait for pod list to return data ...
	I0120 11:23:45.508285  950344 default_sa.go:34] waiting for default service account to be created ...
	I0120 11:23:45.622369  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:45.632949  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:45.691221  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:45.701476  950344 default_sa.go:45] found service account: "default"
	I0120 11:23:45.701502  950344 default_sa.go:55] duration metric: took 193.207612ms for default service account to be created ...
	I0120 11:23:45.701514  950344 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 11:23:45.910894  950344 system_pods.go:87] 18 kube-system pods found
	I0120 11:23:45.976597  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:46.105777  950344 system_pods.go:105] "amd-gpu-device-plugin-xcklk" [f89f7dce-db84-44de-8f83-c1a9b81a5ac8] Running
	I0120 11:23:46.105803  950344 system_pods.go:105] "coredns-668d6bf9bc-5zsqd" [c72f4d6a-7287-441e-9918-8a4db07ca695] Running
	I0120 11:23:46.105814  950344 system_pods.go:105] "csi-hostpath-attacher-0" [83df8c8e-e2b2-42ae-9444-8eae3b349fbf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0120 11:23:46.105821  950344 system_pods.go:105] "csi-hostpath-resizer-0" [be4b0ad7-05a4-47c2-8407-a6fbd9f55a17] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0120 11:23:46.105831  950344 system_pods.go:105] "csi-hostpathplugin-wfjj7" [09928bb3-eae6-47bc-8f67-45cb2bda653a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0120 11:23:46.105838  950344 system_pods.go:105] "etcd-addons-158281" [41a6d1d2-5fe4-4847-b2bf-c739ca954421] Running
	I0120 11:23:46.105844  950344 system_pods.go:105] "kube-apiserver-addons-158281" [1ca6a3ef-2b85-4764-83c3-72b27d358e63] Running
	I0120 11:23:46.105849  950344 system_pods.go:105] "kube-controller-manager-addons-158281" [e85dfd36-cb37-4ab5-be6c-88f29e4f8227] Running
	I0120 11:23:46.105854  950344 system_pods.go:105] "kube-ingress-dns-minikube" [d31ebc36-32f0-4fde-bd54-6a98f1c9f971] Running
	I0120 11:23:46.105858  950344 system_pods.go:105] "kube-proxy-8666g" [c08ec316-b79d-4820-8586-73c10c289d0f] Running
	I0120 11:23:46.105863  950344 system_pods.go:105] "kube-scheduler-addons-158281" [d77e34be-20f9-41aa-be5e-e981abe1f2f4] Running
	I0120 11:23:46.105870  950344 system_pods.go:105] "metrics-server-7fbb699795-kg4cd" [47516766-d1e7-492c-b56e-4ee032ec8b3f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 11:23:46.105874  950344 system_pods.go:105] "nvidia-device-plugin-daemonset-qwbjn" [22f8389b-4a08-44b4-8bf5-4052d2b93153] Running
	I0120 11:23:46.105880  950344 system_pods.go:105] "registry-6c86875c6f-hrzzv" [429b7809-2f4f-4e55-af7f-3ecbbf87557d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0120 11:23:46.105886  950344 system_pods.go:105] "registry-proxy-whl4v" [bdd62f98-726c-40c6-a3c6-fa45328ca334] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0120 11:23:46.105898  950344 system_pods.go:105] "snapshot-controller-68b874b76f-mvzbt" [946601d8-93c2-4dad-9121-12b02f2d86aa] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0120 11:23:46.105906  950344 system_pods.go:105] "snapshot-controller-68b874b76f-n5k9g" [bc76a6b4-edc2-4120-95c7-c1752c6fb852] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0120 11:23:46.105911  950344 system_pods.go:105] "storage-provisioner" [18b4f3d7-4d3f-45a1-8a8b-85631889c59a] Running
	I0120 11:23:46.105920  950344 system_pods.go:147] duration metric: took 404.398754ms to wait for k8s-apps to be running ...
	I0120 11:23:46.105931  950344 system_svc.go:44] waiting for kubelet service to be running ....
	I0120 11:23:46.105981  950344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 11:23:46.122586  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:46.122800  950344 system_svc.go:56] duration metric: took 16.859041ms WaitForService to wait for kubelet
	I0120 11:23:46.122825  950344 kubeadm.go:582] duration metric: took 35.250998538s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 11:23:46.122841  950344 node_conditions.go:102] verifying NodePressure condition ...
	I0120 11:23:46.132557  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:46.190842  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:46.302337  950344 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 11:23:46.302376  950344 node_conditions.go:123] node cpu capacity is 2
	I0120 11:23:46.302391  950344 node_conditions.go:105] duration metric: took 179.543473ms to run NodePressure ...
	I0120 11:23:46.302403  950344 start.go:241] waiting for startup goroutines ...
	I0120 11:23:46.480909  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:46.623723  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:46.633098  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:46.692010  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:46.977535  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:47.122472  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:47.132454  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:47.190169  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:47.478064  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:47.622709  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:47.632603  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:47.690469  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:47.977697  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:48.123038  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:48.132937  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:48.191087  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:48.477981  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:48.622178  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:48.633511  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:48.691198  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:48.977459  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:49.122894  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:49.132722  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:49.191065  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:49.477414  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:49.623139  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:49.633212  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:49.691662  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:49.979504  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:50.123233  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:50.132536  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 11:23:50.191536  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:50.477675  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:50.622041  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:50.633199  950344 kapi.go:107] duration metric: took 32.003721928s to wait for kubernetes.io/minikube-addons=registry ...
	I0120 11:23:50.691579  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:50.977216  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:51.122459  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:51.193121  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:51.476940  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:51.622727  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:51.691698  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:51.977509  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:52.123439  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:52.191240  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:52.478793  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:52.623341  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:52.692250  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:52.983640  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:53.122708  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:53.191341  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:53.477950  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:53.643568  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:53.734247  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:53.977263  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:54.127308  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:54.190813  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:54.477462  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:54.624433  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:54.691260  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:54.977628  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:55.122919  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:55.191095  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:55.477878  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:55.623043  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:55.691536  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:55.977452  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:56.121897  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:56.192472  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:56.477354  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:56.623688  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:56.691608  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:56.977038  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:57.122666  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:57.191119  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:57.477003  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:57.622618  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:57.691212  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:57.977780  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:58.123359  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:58.224535  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:58.476881  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:58.623028  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:58.691841  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:58.977121  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:59.122702  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:59.190781  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:59.476997  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:23:59.625062  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:23:59.691317  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:23:59.978187  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:00.123830  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:00.192117  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:00.478292  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:00.623654  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:00.691852  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:00.977786  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:01.122811  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:01.190927  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:01.477019  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:01.622428  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:01.691928  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:01.977458  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:02.123850  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:02.191658  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:02.477958  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:02.622169  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:02.691767  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:02.976780  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:03.122177  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:03.191159  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:03.477545  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:03.623205  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:03.692164  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:03.977002  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:04.122752  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:04.191147  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:04.758714  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:04.759840  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:04.862293  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:04.977444  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:05.123048  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:05.191757  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:05.478598  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:05.623679  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:05.691572  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:05.977747  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:06.122792  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:06.198043  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:06.477115  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:06.623107  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:06.691234  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:06.977520  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:07.123055  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:07.190725  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:07.477134  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:07.624567  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:07.690716  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:07.977035  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:08.122512  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:08.190513  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:08.477728  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:08.623560  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:08.691477  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:08.976876  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:09.122576  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:09.192608  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:09.477098  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:09.623859  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:09.691384  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:09.977088  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:10.122460  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:10.191858  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:10.546872  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:10.624215  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:10.727704  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:10.977119  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:11.123382  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:11.192160  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:11.477185  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:11.623201  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:11.691608  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:11.977705  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:12.122914  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:12.192023  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:12.478341  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:12.624080  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:12.724471  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:12.977768  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:13.123502  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:13.191659  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:13.476941  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:13.622230  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:13.691538  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:13.976638  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:14.123004  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:14.191596  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:14.481882  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:14.622405  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:14.691803  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:14.977028  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:15.122738  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:15.191410  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:15.476925  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:15.624066  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:15.691439  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:15.978451  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:16.125871  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:16.194833  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:16.477454  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:16.623354  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:16.691159  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:16.977793  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:17.122058  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:17.191850  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:17.478130  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:17.622700  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:17.833518  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:17.977083  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:18.123020  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:18.190634  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:18.476660  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:18.623184  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:18.691154  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:18.977475  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:19.123187  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:19.191025  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:19.480567  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:19.624037  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:19.724529  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:19.977958  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:20.123433  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:20.191685  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:20.476713  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:20.622845  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:20.723470  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:20.978458  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:21.122482  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:21.191016  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:21.476856  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:21.622804  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:21.691937  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:21.977272  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:22.125132  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:22.235869  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:22.477612  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:22.623380  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:22.691238  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:22.977757  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:23.124175  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:23.191091  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:23.477534  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:23.623394  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:23.691462  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:23.976761  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:24.122049  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:24.191294  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:24.477132  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:24.623071  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:24.691277  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:25.235276  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:25.235570  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:25.235754  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:25.478315  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:25.624001  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:25.691461  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:25.977540  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:26.122350  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:26.191500  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:26.478889  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:26.622472  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:26.691876  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:26.976941  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:27.122423  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:27.191985  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:27.476966  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:27.625494  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:27.691932  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:28.246841  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:28.247301  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:28.247329  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:28.481379  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:28.623436  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:28.691648  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:28.976917  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:29.121979  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:29.190909  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:29.477286  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:29.623745  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:29.724450  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:29.985266  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:30.124654  950344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 11:24:30.191665  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:30.498215  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:30.622728  950344 kapi.go:107] duration metric: took 1m12.004295212s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0120 11:24:30.729096  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:31.259764  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:31.264037  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:31.476913  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:31.691480  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:31.976985  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:32.191758  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:32.477047  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:32.690642  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:32.976762  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:33.196475  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:33.477601  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:33.691394  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:33.977640  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:34.192328  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:34.477240  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:34.691748  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:34.977381  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:35.190869  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:35.477821  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 11:24:35.692696  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:35.976895  950344 kapi.go:107] duration metric: took 1m14.503154986s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0120 11:24:35.979153  950344 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-158281 cluster.
	I0120 11:24:35.980583  950344 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0120 11:24:35.981988  950344 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0120 11:24:36.191733  950344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 11:24:36.691773  950344 kapi.go:107] duration metric: took 1m16.504915534s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0120 11:24:36.693304  950344 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, nvidia-device-plugin, cloud-spanner, yakd, inspektor-gadget, metrics-server, amd-gpu-device-plugin, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0120 11:24:36.694654  950344 addons.go:514] duration metric: took 1m25.822913974s for enable addons: enabled=[ingress-dns storage-provisioner nvidia-device-plugin cloud-spanner yakd inspektor-gadget metrics-server amd-gpu-device-plugin default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0120 11:24:36.694715  950344 start.go:246] waiting for cluster config update ...
	I0120 11:24:36.694745  950344 start.go:255] writing updated cluster config ...
	I0120 11:24:36.695100  950344 ssh_runner.go:195] Run: rm -f paused
	I0120 11:24:36.749599  950344 start.go:600] kubectl: 1.32.1, cluster: 1.32.0 (minor skew: 0)
	I0120 11:24:36.751477  950344 out.go:177] * Done! kubectl is now configured to use "addons-158281" cluster and "default" namespace by default
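	The gcp-auth messages earlier in this log describe the addon's behavior: once enabled, the GCP credentials are mounted into every newly created pod in the addons-158281 cluster unless the pod carries a label with the `gcp-auth-skip-secret` key (the ingress-nginx controller sandbox listed in the CRI-O section below carries exactly that label). As a minimal illustrative sketch only, not part of this test run, a pod that opts out of the credential mount could be declared like this; the pod name and image are placeholders:

    apiVersion: v1
    kind: Pod
    metadata:
      name: skip-gcp-auth-example        # placeholder name, not from this report
      labels:
        gcp-auth-skip-secret: "true"     # label key taken from the gcp-auth addon message above
    spec:
      containers:
      - name: app
        image: nginx                     # placeholder image

	Per the addon message, pods that already exist are not updated retroactively; they would need to be recreated, or the addon re-enabled with --refresh.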
	
	
	==> CRI-O <==
	Jan 20 11:27:44 addons-158281 crio[658]: time="2025-01-20 11:27:44.897085461Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:b7511c0bdf4acc678460508acfff5cb16a7536626d58e06e0cbe5250f46d7a38,Metadata:&PodSandboxMetadata{Name:hello-world-app-7d9564db4-xk4hg,Uid:ed5738f4-dab1-4420-ab1e-03c7501e608f,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737372463967871918,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-7d9564db4-xk4hg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ed5738f4-dab1-4420-ab1e-03c7501e608f,pod-template-hash: 7d9564db4,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-20T11:27:43.660353907Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:865f1d0c9c9f442e6bf57115c905cfea1c9b860ba4e285e5f6926cca8da45cce,Metadata:&PodSandboxMetadata{Name:nginx,Uid:aa058638-8c52-452d-80ca-c0225f49ce0e,Namespace:default,Attempt:0,},St
ate:SANDBOX_READY,CreatedAt:1737372316645652163,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aa058638-8c52-452d-80ca-c0225f49ce0e,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-20T11:25:16.335089163Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0a97413a9df6a333ccb612e49630ef77d61586249736488f811a7f690ff7091a,Metadata:&PodSandboxMetadata{Name:busybox,Uid:ffeb7439-f2bd-4e16-b03c-51ac6665b4f0,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737372277640280573,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ffeb7439-f2bd-4e16-b03c-51ac6665b4f0,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-20T11:24:37.331535015Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4a0f66ac1d44c3aaa17bd
d610a4c163b77020883ccafe60f11d71ff44cea4a2a,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-56d7c84fd4-kcqnd,Uid:1e127c71-0293-4cd1-81b0-99a0f7808dff,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737372262692016348,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-kcqnd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1e127c71-0293-4cd1-81b0-99a0f7808dff,pod-template-hash: 56d7c84fd4,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-20T11:23:18.475676889Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9dc5dc2a7b9538f51671b4bf4fd88f7e5efb5c69225707190945fdf0d08d2b4e,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-7xdlp,Uid:02f2df92-09ab-4589-8337-28b1b6c2c834,Namespace:ingress-nginx,Attempt:0,},St
ate:SANDBOX_NOTREADY,CreatedAt:1737372198875859152,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: 90e03033-4c5f-45c1-b0b4-5e18a937ae91,batch.kubernetes.io/job-name: ingress-nginx-admission-patch,controller-uid: 90e03033-4c5f-45c1-b0b4-5e18a937ae91,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-7xdlp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 02f2df92-09ab-4589-8337-28b1b6c2c834,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-20T11:23:18.556423594Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dbc1c03e6db48b40bc346916439418cf8cff0964bf3c0eb41505f9805a94db45,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-wkzks,Uid:ebe6bc96-d3af-4a80-9856-19c3cea66c92,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,Crea
tedAt:1737372198861355421,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: c210a5df-415b-4d3d-b34b-88ae9c143a40,batch.kubernetes.io/job-name: ingress-nginx-admission-create,controller-uid: c210a5df-415b-4d3d-b34b-88ae9c143a40,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-create-wkzks,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ebe6bc96-d3af-4a80-9856-19c3cea66c92,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-20T11:23:18.541683553Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e20039265b2471bf608468ff141228d0cbba7b3c4181246e10b0641339973852,Metadata:&PodSandboxMetadata{Name:local-path-provisioner-76f89f99b5-tj4wf,Uid:0b7ddc31-ab7d-4d5d-83f5-699020ce7592,Namespace:local-path-storage,Attempt:0,},State:SANDBOX_READY,CreatedAt:17373721965
12465134,Labels:map[string]string{app: local-path-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-tj4wf,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 0b7ddc31-ab7d-4d5d-83f5-699020ce7592,pod-template-hash: 76f89f99b5,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-20T11:23:16.198002526Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9119571557f81beb1dfc40d3afa66a03dbafdfaffb0f6a20245b29354be15ba4,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:18b4f3d7-4d3f-45a1-8a8b-85631889c59a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737372195879771318,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18b4f3d7-4d3f-45a1-8a8b-85631889c59a,},Annotations:map[string]string{kubectl
.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-01-20T11:23:15.268000040Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:138a28832750a884cd8eb7113044228d58571ca6093ab69d8fcda0c944eb9492,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:d31ebc36-32f0-4fde-bd54-6a98f1c9f971,Namespace:kube-system,Attem
pt:0,},State:SANDBOX_READY,CreatedAt:1737372194827232008,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d31ebc36-32f0-4fde-bd54-6a98f1c9f971,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\"
:[{\"containerPort\":53,\"protocol\":\"UDP\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\"}}\n,kubernetes.io/config.seen: 2025-01-20T11:23:14.508740143Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7aff438d3a7a8d64f05e8cf0e85ff8a18a62a4314cb1466e468196c75c895609,Metadata:&PodSandboxMetadata{Name:amd-gpu-device-plugin-xcklk,Uid:f89f7dce-db84-44de-8f83-c1a9b81a5ac8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737372193480354163,Labels:map[string]string{controller-revision-hash: 578b4c597,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-xcklk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f89f7dce-db84-44de-8f83-c1a9b81a5ac8,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-20T11:23:13.118316579Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:90d8a5170d6ede961d0cb605b854580d
f43242090e883a14ca1dd988480fad36,Metadata:&PodSandboxMetadata{Name:kube-proxy-8666g,Uid:c08ec316-b79d-4820-8586-73c10c289d0f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737372192421248161,Labels:map[string]string{controller-revision-hash: 64b9dbc74b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-8666g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c08ec316-b79d-4820-8586-73c10c289d0f,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-20T11:23:10.615254091Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f94950398688748a17471457b800f9ed0caffd3a503eec921ecb115718529af8,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-5zsqd,Uid:c72f4d6a-7287-441e-9918-8a4db07ca695,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737372191979835095,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-5zsqd,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: c72f4d6a-7287-441e-9918-8a4db07ca695,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-20T11:23:11.068953869Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:08302b9c41060eb122586275320b9c906855034c722e1ce07c326399f59f8a26,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-158281,Uid:b5afd54ecff6393731ebc69356b5bdc8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737372180497178752,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-158281,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5afd54ecff6393731ebc69356b5bdc8,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b5afd54ecff6393731ebc69356b5bdc8,kubernetes.io/config.seen: 2025-01-20T11:23:00.021625766Z,kubernetes.io/config.source: file,},RuntimeHandler:
,},&PodSandbox{Id:d135b9a571baa9097ff230e07233ed6f1af439fefd2ba1fac312f612b5a010e0,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-158281,Uid:3d01003b56128f0a624f617eb73162e8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737372180486491433,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-158281,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d01003b56128f0a624f617eb73162e8,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3d01003b56128f0a624f617eb73162e8,kubernetes.io/config.seen: 2025-01-20T11:23:00.021626701Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ed675fb4c3e3457d2e18a1caaa54c0d4f5598db7676b78cc72e3ef312698124c,Metadata:&PodSandboxMetadata{Name:etcd-addons-158281,Uid:6976056fdb8201f5330f648e5e08d70d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737372180476318129,Labels:map[string]string{component: etcd,io.
kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-158281,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6976056fdb8201f5330f648e5e08d70d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.113:2379,kubernetes.io/config.hash: 6976056fdb8201f5330f648e5e08d70d,kubernetes.io/config.seen: 2025-01-20T11:23:00.021621949Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a959f393cf281fc4ce3b8acdc168f23e7a0df5736dd091f0a70dab9e755ed605,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-158281,Uid:5be29f3a2e0071ad4462e62aba5d5c4c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737372180475442750,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-158281,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be29f3a2e0071ad4462e62aba5d5c4c,tier: control-plane,},Annotations:map[string]string{
kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.113:8443,kubernetes.io/config.hash: 5be29f3a2e0071ad4462e62aba5d5c4c,kubernetes.io/config.seen: 2025-01-20T11:23:00.021624690Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=cc3893bb-d675-4950-8f44-69136a3cf3ee name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 20 11:27:44 addons-158281 crio[658]: time="2025-01-20 11:27:44.897880633Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6f1bef37-1c9d-48c0-a0a0-d1f5fd8ef9d3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 11:27:44 addons-158281 crio[658]: time="2025-01-20 11:27:44.897933041Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6f1bef37-1c9d-48c0-a0a0-d1f5fd8ef9d3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 11:27:44 addons-158281 crio[658]: time="2025-01-20 11:27:44.898268910Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0d9d85a65ce1d3c983464424dd21732da109e094cf00adfe9b5d1739a155294,PodSandboxId:865f1d0c9c9f442e6bf57115c905cfea1c9b860ba4e285e5f6926cca8da45cce,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737372324134667671,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aa058638-8c52-452d-80ca-c0225f49ce0e,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a078cdd2c240c57fac7618c1cfda66abfa66bfd24dacf0227ed7284c919d95a0,PodSandboxId:0a97413a9df6a333ccb612e49630ef77d61586249736488f811a7f690ff7091a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737372280905647056,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ffeb7439-f2bd-4e16-b03c-51ac6665b4f0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afad4663acae415daefe9873732618ddb1a8de26cb9ff76bbd5b37818806c691,PodSandboxId:4a0f66ac1d44c3aaa17bdd610a4c163b77020883ccafe60f11d71ff44cea4a2a,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737372269552321545,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-kcqnd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1e127c71-0293-4cd1-81b0-99a0f7808dff,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:42f49881287483af8256356b128061e9498b76477d7975975f5bdac934fad05b,PodSandboxId:9dc5dc2a7b9538f51671b4bf4fd88f7e5efb5c69225707190945fdf0d08d2b4e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737372254361535127,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7xdlp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 02f2df92-09ab-4589-8337-28b1b6c2c834,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9ddded45c28a86a3a0d93af732d5b42bfbf6613b15ac8be0b92e14920b51cb3,PodSandboxId:dbc1c03e6db48b40bc346916439418cf8cff0964bf3c0eb41505f9805a94db45,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737372254237224036,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-wkzks,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ebe6bc96-d3af-4a80-9856-19c3cea66c92,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b228b2e47ed7cb7b99b0e61b273b9d49d885c68e6ac5e8d958097bd3b47b319,PodSandboxId:e20039265b2471bf608468ff141228d0cbba7b3c4181246e10b0641339973852,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotati
ons:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1737372240838179297,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-tj4wf,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 0b7ddc31-ab7d-4d5d-83f5-699020ce7592,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ce4dc1f12705bd9c0ac3a3f7f8cff890b03dbbf9e7738792d28ae6d9ec13b5,PodSandboxId:7aff438d3a7a8d64f05e8cf0e85ff8a18a62a4314cb1466e468196c75c895609,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf227
4e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737372222732420242,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-xcklk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f89f7dce-db84-44de-8f83-c1a9b81a5ac8,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:786f6bca1859ffd724d3f97bf347a2379eccbdb85caa6506e36707a8334047fb,PodSandboxId:138a28832750a884cd8eb7113044228d58571ca6093ab69d8fcda0c944eb9492,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-min
ikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737372205965068129,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d31ebc36-32f0-4fde-bd54-6a98f1c9f971,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1be5114147bd27b32093e6a3b646ed1226da6ebe9853af494cd61d6c0bf52fb9,PodSandboxId:9119571557f81beb1dfc40d3afa66a0
3dbafdfaffb0f6a20245b29354be15ba4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737372196943906592,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18b4f3d7-4d3f-45a1-8a8b-85631889c59a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a96fc4775112c20c029cc19b3d7312afc8b3b4dc4d2480601e9d959ce3488c0b,PodSandboxId:f94950398688748a17471457b800f9ed0caffd3a503
eec921ecb115718529af8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737372195426058777,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-5zsqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c72f4d6a-7287-441e-9918-8a4db07ca695,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:458bf7cde99da45061dd779a9b942a958ababa26fe2d9fe5d627cc04e56c73f6,PodSandboxId:90d8a5170d6ede961d0cb605b854580df43242090e883a14ca1dd988480fad36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737372193260735668,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8666g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c08ec316-b79d-4820-8586-73c10c289d0f,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.term
inationGracePeriod: 30,},},&Container{Id:3c4a3fa3903d3a343459f4f9e7503b722e5c51275c89ffb9cfe4586cb4039abc,PodSandboxId:08302b9c41060eb122586275320b9c906855034c722e1ce07c326399f59f8a26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737372180918702174,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-158281,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5afd54ecff6393731ebc69356b5bdc8,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernete
s.pod.terminationGracePeriod: 30,},},&Container{Id:c94fefefab6e099367c5765167117c13cb1ccf55549eed4b2f60ffa33d42e394,PodSandboxId:d135b9a571baa9097ff230e07233ed6f1af439fefd2ba1fac312f612b5a010e0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737372180922479567,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-158281,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d01003b56128f0a624f617eb73162e8,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termination
GracePeriod: 30,},},&Container{Id:217439dc1324b4de7820b54e43f977e8865b515aa95f53653fec40383fef33fd,PodSandboxId:a959f393cf281fc4ce3b8acdc168f23e7a0df5736dd091f0a70dab9e755ed605,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737372180899039018,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-158281,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be29f3a2e0071ad4462e62aba5d5c4c,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,}
,},&Container{Id:d49147fc2984d6c7cba776e093cf6a632567c637b6d7afd244e9290aa3e4ce4e,PodSandboxId:ed675fb4c3e3457d2e18a1caaa54c0d4f5598db7676b78cc72e3ef312698124c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737372180891580726,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-158281,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6976056fdb8201f5330f648e5e08d70d,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74
" id=6f1bef37-1c9d-48c0-a0a0-d1f5fd8ef9d3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 11:27:44 addons-158281 crio[658]: time="2025-01-20 11:27:44.899490029Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:nil,LabelSelector:map[string]string{io.kubernetes.pod.uid: ed5738f4-dab1-4420-ab1e-03c7501e608f,},},}" file="otel-collector/interceptors.go:62" id=eb635d25-c49c-4717-b267-80e881c48731 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 20 11:27:44 addons-158281 crio[658]: time="2025-01-20 11:27:44.899588331Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:b7511c0bdf4acc678460508acfff5cb16a7536626d58e06e0cbe5250f46d7a38,Metadata:&PodSandboxMetadata{Name:hello-world-app-7d9564db4-xk4hg,Uid:ed5738f4-dab1-4420-ab1e-03c7501e608f,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737372463967871918,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-7d9564db4-xk4hg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ed5738f4-dab1-4420-ab1e-03c7501e608f,pod-template-hash: 7d9564db4,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-20T11:27:43.660353907Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=eb635d25-c49c-4717-b267-80e881c48731 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 20 11:27:44 addons-158281 crio[658]: time="2025-01-20 11:27:44.899956787Z" level=debug msg="Request: &PodSandboxStatusRequest{PodSandboxId:b7511c0bdf4acc678460508acfff5cb16a7536626d58e06e0cbe5250f46d7a38,Verbose:false,}" file="otel-collector/interceptors.go:62" id=b8e2c1d7-3266-4e51-bedb-e18a7b631247 name=/runtime.v1.RuntimeService/PodSandboxStatus
	Jan 20 11:27:44 addons-158281 crio[658]: time="2025-01-20 11:27:44.900046904Z" level=debug msg="Response: &PodSandboxStatusResponse{Status:&PodSandboxStatus{Id:b7511c0bdf4acc678460508acfff5cb16a7536626d58e06e0cbe5250f46d7a38,Metadata:&PodSandboxMetadata{Name:hello-world-app-7d9564db4-xk4hg,Uid:ed5738f4-dab1-4420-ab1e-03c7501e608f,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737372463967871918,Network:&PodSandboxNetworkStatus{Ip:10.244.0.33,AdditionalIps:[]*PodIP{},},Linux:&LinuxPodSandboxStatus{Namespaces:&Namespace{Options:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOptions:nil,},},},Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-7d9564db4-xk4hg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ed5738f4-dab1-4420-ab1e-03c7501e608f,pod-template-hash: 7d9564db4,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-20T11:27:43.660353907Z,kubernetes.io/config.source: api
,},RuntimeHandler:,},Info:map[string]string{},ContainersStatuses:[]*ContainerStatus{},Timestamp:0,}" file="otel-collector/interceptors.go:74" id=b8e2c1d7-3266-4e51-bedb-e18a7b631247 name=/runtime.v1.RuntimeService/PodSandboxStatus
	Jan 20 11:27:44 addons-158281 crio[658]: time="2025-01-20 11:27:44.900439750Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: ed5738f4-dab1-4420-ab1e-03c7501e608f,},},}" file="otel-collector/interceptors.go:62" id=969c5487-94fc-4856-964f-fc49201617cb name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 11:27:44 addons-158281 crio[658]: time="2025-01-20 11:27:44.900516946Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=969c5487-94fc-4856-964f-fc49201617cb name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 11:27:44 addons-158281 crio[658]: time="2025-01-20 11:27:44.900558042Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=969c5487-94fc-4856-964f-fc49201617cb name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 11:27:44 addons-158281 crio[658]: time="2025-01-20 11:27:44.904025685Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0ea40b12-cd88-4dc3-bfbf-a5a189960a8d name=/runtime.v1.RuntimeService/Version
	Jan 20 11:27:44 addons-158281 crio[658]: time="2025-01-20 11:27:44.904081399Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0ea40b12-cd88-4dc3-bfbf-a5a189960a8d name=/runtime.v1.RuntimeService/Version
	Jan 20 11:27:44 addons-158281 crio[658]: time="2025-01-20 11:27:44.904924467Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f686fe16-18f9-460d-915e-363563a904e8 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 11:27:44 addons-158281 crio[658]: time="2025-01-20 11:27:44.906010182Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737372464905992517,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595294,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f686fe16-18f9-460d-915e-363563a904e8 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 11:27:44 addons-158281 crio[658]: time="2025-01-20 11:27:44.906488077Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=536fabc8-6db6-4fe6-98ca-c740bfe09237 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 11:27:44 addons-158281 crio[658]: time="2025-01-20 11:27:44.906550865Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=536fabc8-6db6-4fe6-98ca-c740bfe09237 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 11:27:44 addons-158281 crio[658]: time="2025-01-20 11:27:44.906801707Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0d9d85a65ce1d3c983464424dd21732da109e094cf00adfe9b5d1739a155294,PodSandboxId:865f1d0c9c9f442e6bf57115c905cfea1c9b860ba4e285e5f6926cca8da45cce,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737372324134667671,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aa058638-8c52-452d-80ca-c0225f49ce0e,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a078cdd2c240c57fac7618c1cfda66abfa66bfd24dacf0227ed7284c919d95a0,PodSandboxId:0a97413a9df6a333ccb612e49630ef77d61586249736488f811a7f690ff7091a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737372280905647056,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ffeb7439-f2bd-4e16-b03c-51ac6665b4f0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afad4663acae415daefe9873732618ddb1a8de26cb9ff76bbd5b37818806c691,PodSandboxId:4a0f66ac1d44c3aaa17bdd610a4c163b77020883ccafe60f11d71ff44cea4a2a,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737372269552321545,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-kcqnd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1e127c71-0293-4cd1-81b0-99a0f7808dff,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:42f49881287483af8256356b128061e9498b76477d7975975f5bdac934fad05b,PodSandboxId:9dc5dc2a7b9538f51671b4bf4fd88f7e5efb5c69225707190945fdf0d08d2b4e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737372254361535127,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7xdlp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 02f2df92-09ab-4589-8337-28b1b6c2c834,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9ddded45c28a86a3a0d93af732d5b42bfbf6613b15ac8be0b92e14920b51cb3,PodSandboxId:dbc1c03e6db48b40bc346916439418cf8cff0964bf3c0eb41505f9805a94db45,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737372254237224036,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-wkzks,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ebe6bc96-d3af-4a80-9856-19c3cea66c92,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b228b2e47ed7cb7b99b0e61b273b9d49d885c68e6ac5e8d958097bd3b47b319,PodSandboxId:e20039265b2471bf608468ff141228d0cbba7b3c4181246e10b0641339973852,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotati
ons:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1737372240838179297,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-tj4wf,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 0b7ddc31-ab7d-4d5d-83f5-699020ce7592,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ce4dc1f12705bd9c0ac3a3f7f8cff890b03dbbf9e7738792d28ae6d9ec13b5,PodSandboxId:7aff438d3a7a8d64f05e8cf0e85ff8a18a62a4314cb1466e468196c75c895609,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf227
4e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737372222732420242,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-xcklk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f89f7dce-db84-44de-8f83-c1a9b81a5ac8,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:786f6bca1859ffd724d3f97bf347a2379eccbdb85caa6506e36707a8334047fb,PodSandboxId:138a28832750a884cd8eb7113044228d58571ca6093ab69d8fcda0c944eb9492,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-min
ikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737372205965068129,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d31ebc36-32f0-4fde-bd54-6a98f1c9f971,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1be5114147bd27b32093e6a3b646ed1226da6ebe9853af494cd61d6c0bf52fb9,PodSandboxId:9119571557f81beb1dfc40d3afa66a0
3dbafdfaffb0f6a20245b29354be15ba4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737372196943906592,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18b4f3d7-4d3f-45a1-8a8b-85631889c59a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a96fc4775112c20c029cc19b3d7312afc8b3b4dc4d2480601e9d959ce3488c0b,PodSandboxId:f94950398688748a17471457b800f9ed0caffd3a503
eec921ecb115718529af8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737372195426058777,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-5zsqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c72f4d6a-7287-441e-9918-8a4db07ca695,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:458bf7cde99da45061dd779a9b942a958ababa26fe2d9fe5d627cc04e56c73f6,PodSandboxId:90d8a5170d6ede961d0cb605b854580df43242090e883a14ca1dd988480fad36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737372193260735668,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8666g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c08ec316-b79d-4820-8586-73c10c289d0f,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.term
inationGracePeriod: 30,},},&Container{Id:3c4a3fa3903d3a343459f4f9e7503b722e5c51275c89ffb9cfe4586cb4039abc,PodSandboxId:08302b9c41060eb122586275320b9c906855034c722e1ce07c326399f59f8a26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737372180918702174,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-158281,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5afd54ecff6393731ebc69356b5bdc8,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernete
s.pod.terminationGracePeriod: 30,},},&Container{Id:c94fefefab6e099367c5765167117c13cb1ccf55549eed4b2f60ffa33d42e394,PodSandboxId:d135b9a571baa9097ff230e07233ed6f1af439fefd2ba1fac312f612b5a010e0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737372180922479567,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-158281,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d01003b56128f0a624f617eb73162e8,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termination
GracePeriod: 30,},},&Container{Id:217439dc1324b4de7820b54e43f977e8865b515aa95f53653fec40383fef33fd,PodSandboxId:a959f393cf281fc4ce3b8acdc168f23e7a0df5736dd091f0a70dab9e755ed605,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737372180899039018,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-158281,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be29f3a2e0071ad4462e62aba5d5c4c,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,}
,},&Container{Id:d49147fc2984d6c7cba776e093cf6a632567c637b6d7afd244e9290aa3e4ce4e,PodSandboxId:ed675fb4c3e3457d2e18a1caaa54c0d4f5598db7676b78cc72e3ef312698124c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737372180891580726,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-158281,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6976056fdb8201f5330f648e5e08d70d,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74
" id=536fabc8-6db6-4fe6-98ca-c740bfe09237 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 11:27:44 addons-158281 crio[658]: time="2025-01-20 11:27:44.934459468Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=63055b61-9830-40df-9db7-8c35ccfc78a1 name=/runtime.v1.RuntimeService/Version
	Jan 20 11:27:44 addons-158281 crio[658]: time="2025-01-20 11:27:44.934536660Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=63055b61-9830-40df-9db7-8c35ccfc78a1 name=/runtime.v1.RuntimeService/Version
	Jan 20 11:27:44 addons-158281 crio[658]: time="2025-01-20 11:27:44.935542846Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3d964440-9a7e-4b13-891c-2e3d62158294 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 11:27:44 addons-158281 crio[658]: time="2025-01-20 11:27:44.937227515Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737372464937208511,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595294,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3d964440-9a7e-4b13-891c-2e3d62158294 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 11:27:44 addons-158281 crio[658]: time="2025-01-20 11:27:44.937826323Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=652b81e0-eab3-4a0c-82d8-0dc75bf5364d name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 11:27:44 addons-158281 crio[658]: time="2025-01-20 11:27:44.937890255Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=652b81e0-eab3-4a0c-82d8-0dc75bf5364d name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 11:27:44 addons-158281 crio[658]: time="2025-01-20 11:27:44.938213899Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0d9d85a65ce1d3c983464424dd21732da109e094cf00adfe9b5d1739a155294,PodSandboxId:865f1d0c9c9f442e6bf57115c905cfea1c9b860ba4e285e5f6926cca8da45cce,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737372324134667671,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aa058638-8c52-452d-80ca-c0225f49ce0e,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a078cdd2c240c57fac7618c1cfda66abfa66bfd24dacf0227ed7284c919d95a0,PodSandboxId:0a97413a9df6a333ccb612e49630ef77d61586249736488f811a7f690ff7091a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737372280905647056,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ffeb7439-f2bd-4e16-b03c-51ac6665b4f0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afad4663acae415daefe9873732618ddb1a8de26cb9ff76bbd5b37818806c691,PodSandboxId:4a0f66ac1d44c3aaa17bdd610a4c163b77020883ccafe60f11d71ff44cea4a2a,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737372269552321545,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-kcqnd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1e127c71-0293-4cd1-81b0-99a0f7808dff,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:42f49881287483af8256356b128061e9498b76477d7975975f5bdac934fad05b,PodSandboxId:9dc5dc2a7b9538f51671b4bf4fd88f7e5efb5c69225707190945fdf0d08d2b4e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737372254361535127,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7xdlp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 02f2df92-09ab-4589-8337-28b1b6c2c834,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9ddded45c28a86a3a0d93af732d5b42bfbf6613b15ac8be0b92e14920b51cb3,PodSandboxId:dbc1c03e6db48b40bc346916439418cf8cff0964bf3c0eb41505f9805a94db45,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737372254237224036,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-wkzks,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ebe6bc96-d3af-4a80-9856-19c3cea66c92,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b228b2e47ed7cb7b99b0e61b273b9d49d885c68e6ac5e8d958097bd3b47b319,PodSandboxId:e20039265b2471bf608468ff141228d0cbba7b3c4181246e10b0641339973852,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotati
ons:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1737372240838179297,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-tj4wf,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 0b7ddc31-ab7d-4d5d-83f5-699020ce7592,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ce4dc1f12705bd9c0ac3a3f7f8cff890b03dbbf9e7738792d28ae6d9ec13b5,PodSandboxId:7aff438d3a7a8d64f05e8cf0e85ff8a18a62a4314cb1466e468196c75c895609,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf227
4e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737372222732420242,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-xcklk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f89f7dce-db84-44de-8f83-c1a9b81a5ac8,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:786f6bca1859ffd724d3f97bf347a2379eccbdb85caa6506e36707a8334047fb,PodSandboxId:138a28832750a884cd8eb7113044228d58571ca6093ab69d8fcda0c944eb9492,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-min
ikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737372205965068129,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d31ebc36-32f0-4fde-bd54-6a98f1c9f971,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1be5114147bd27b32093e6a3b646ed1226da6ebe9853af494cd61d6c0bf52fb9,PodSandboxId:9119571557f81beb1dfc40d3afa66a0
3dbafdfaffb0f6a20245b29354be15ba4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737372196943906592,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18b4f3d7-4d3f-45a1-8a8b-85631889c59a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a96fc4775112c20c029cc19b3d7312afc8b3b4dc4d2480601e9d959ce3488c0b,PodSandboxId:f94950398688748a17471457b800f9ed0caffd3a503
eec921ecb115718529af8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737372195426058777,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-5zsqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c72f4d6a-7287-441e-9918-8a4db07ca695,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:458bf7cde99da45061dd779a9b942a958ababa26fe2d9fe5d627cc04e56c73f6,PodSandboxId:90d8a5170d6ede961d0cb605b854580df43242090e883a14ca1dd988480fad36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737372193260735668,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8666g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c08ec316-b79d-4820-8586-73c10c289d0f,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.term
inationGracePeriod: 30,},},&Container{Id:3c4a3fa3903d3a343459f4f9e7503b722e5c51275c89ffb9cfe4586cb4039abc,PodSandboxId:08302b9c41060eb122586275320b9c906855034c722e1ce07c326399f59f8a26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737372180918702174,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-158281,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5afd54ecff6393731ebc69356b5bdc8,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernete
s.pod.terminationGracePeriod: 30,},},&Container{Id:c94fefefab6e099367c5765167117c13cb1ccf55549eed4b2f60ffa33d42e394,PodSandboxId:d135b9a571baa9097ff230e07233ed6f1af439fefd2ba1fac312f612b5a010e0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737372180922479567,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-158281,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d01003b56128f0a624f617eb73162e8,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termination
GracePeriod: 30,},},&Container{Id:217439dc1324b4de7820b54e43f977e8865b515aa95f53653fec40383fef33fd,PodSandboxId:a959f393cf281fc4ce3b8acdc168f23e7a0df5736dd091f0a70dab9e755ed605,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737372180899039018,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-158281,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be29f3a2e0071ad4462e62aba5d5c4c,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,}
,},&Container{Id:d49147fc2984d6c7cba776e093cf6a632567c637b6d7afd244e9290aa3e4ce4e,PodSandboxId:ed675fb4c3e3457d2e18a1caaa54c0d4f5598db7676b78cc72e3ef312698124c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737372180891580726,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-158281,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6976056fdb8201f5330f648e5e08d70d,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74
" id=652b81e0-eab3-4a0c-82d8-0dc75bf5364d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c0d9d85a65ce1       docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901                              2 minutes ago       Running             nginx                     0                   865f1d0c9c9f4       nginx
	a078cdd2c240c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   0a97413a9df6a       busybox
	afad4663acae4       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   4a0f66ac1d44c       ingress-nginx-controller-56d7c84fd4-kcqnd
	42f4988128748       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              patch                     0                   9dc5dc2a7b953       ingress-nginx-admission-patch-7xdlp
	d9ddded45c28a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              create                    0                   dbc1c03e6db48       ingress-nginx-admission-create-wkzks
	1b228b2e47ed7       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago       Running             local-path-provisioner    0                   e20039265b247       local-path-provisioner-76f89f99b5-tj4wf
	49ce4dc1f1270       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   7aff438d3a7a8       amd-gpu-device-plugin-xcklk
	786f6bca1859f       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago       Running             minikube-ingress-dns      0                   138a28832750a       kube-ingress-dns-minikube
	1be5114147bd2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   9119571557f81       storage-provisioner
	a96fc4775112c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago       Running             coredns                   0                   f949503986887       coredns-668d6bf9bc-5zsqd
	458bf7cde99da       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08                                                             4 minutes ago       Running             kube-proxy                0                   90d8a5170d6ed       kube-proxy-8666g
	c94fefefab6e0       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5                                                             4 minutes ago       Running             kube-scheduler            0                   d135b9a571baa       kube-scheduler-addons-158281
	3c4a3fa3903d3       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3                                                             4 minutes ago       Running             kube-controller-manager   0                   08302b9c41060       kube-controller-manager-addons-158281
	217439dc1324b       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4                                                             4 minutes ago       Running             kube-apiserver            0                   a959f393cf281       kube-apiserver-addons-158281
	d49147fc2984d       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                             4 minutes ago       Running             etcd                      0                   ed675fb4c3e34       etcd-addons-158281
	
	
	==> coredns [a96fc4775112c20c029cc19b3d7312afc8b3b4dc4d2480601e9d959ce3488c0b] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	[INFO] Reloading complete
	[INFO] 10.244.0.23:50134 - 50474 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000525664s
	[INFO] 10.244.0.23:48882 - 14353 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000144138s
	[INFO] 10.244.0.23:42024 - 30395 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000144235s
	[INFO] 10.244.0.23:37192 - 10273 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000079975s
	[INFO] 10.244.0.23:39190 - 16833 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000127822s
	[INFO] 10.244.0.23:47304 - 29349 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000372801s
	[INFO] 10.244.0.23:55545 - 45460 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.000841167s
	[INFO] 10.244.0.23:36291 - 21796 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001193859s
	[INFO] 10.244.0.27:53931 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000381166s
	[INFO] 10.244.0.27:41573 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000131735s
	
	
	==> describe nodes <==
	Name:               addons-158281
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-158281
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77d80cf1517f5f1439721b28711982314b21bec9
	                    minikube.k8s.io/name=addons-158281
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_20T11_23_06_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-158281
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Jan 2025 11:23:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-158281
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Jan 2025 11:27:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Jan 2025 11:25:49 +0000   Mon, 20 Jan 2025 11:23:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Jan 2025 11:25:49 +0000   Mon, 20 Jan 2025 11:23:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Jan 2025 11:25:49 +0000   Mon, 20 Jan 2025 11:23:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Jan 2025 11:25:49 +0000   Mon, 20 Jan 2025 11:23:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.113
	  Hostname:    addons-158281
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 24bf314620d045bd98e935ac573556c4
	  System UUID:                24bf3146-20d0-45bd-98e9-35ac573556c4
	  Boot ID:                    93528b8d-d09f-44f7-a30f-e03ff4c397d8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  default                     hello-world-app-7d9564db4-xk4hg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-kcqnd    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m27s
	  kube-system                 amd-gpu-device-plugin-xcklk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 coredns-668d6bf9bc-5zsqd                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m35s
	  kube-system                 etcd-addons-158281                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m40s
	  kube-system                 kube-apiserver-addons-158281                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-controller-manager-addons-158281        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-proxy-8666g                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-scheduler-addons-158281                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  local-path-storage          local-path-provisioner-76f89f99b5-tj4wf      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m28s                  kube-proxy       
	  Normal  Starting                 4m45s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m45s (x8 over 4m45s)  kubelet          Node addons-158281 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m45s (x8 over 4m45s)  kubelet          Node addons-158281 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m45s (x7 over 4m45s)  kubelet          Node addons-158281 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m40s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m39s                  kubelet          Node addons-158281 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m39s                  kubelet          Node addons-158281 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m39s                  kubelet          Node addons-158281 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m38s                  kubelet          Node addons-158281 status is now: NodeReady
	  Normal  RegisteredNode           4m36s                  node-controller  Node addons-158281 event: Registered Node addons-158281 in Controller
	
	
	==> dmesg <==
	[  +0.070431] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.276563] systemd-fstab-generator[1359]: Ignoring "noauto" option for root device
	[  +0.127230] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.128980] kauditd_printk_skb: 103 callbacks suppressed
	[  +5.153734] kauditd_printk_skb: 159 callbacks suppressed
	[  +5.193261] kauditd_printk_skb: 67 callbacks suppressed
	[ +25.750911] kauditd_printk_skb: 2 callbacks suppressed
	[Jan20 11:24] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.317419] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.228406] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.056567] kauditd_printk_skb: 36 callbacks suppressed
	[  +5.072008] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.355597] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.272093] kauditd_printk_skb: 7 callbacks suppressed
	[ +13.077913] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.253275] kauditd_printk_skb: 2 callbacks suppressed
	[Jan20 11:25] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.327273] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.398555] kauditd_printk_skb: 45 callbacks suppressed
	[  +5.060876] kauditd_printk_skb: 65 callbacks suppressed
	[  +5.005399] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.687472] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.140750] kauditd_printk_skb: 2 callbacks suppressed
	[Jan20 11:26] kauditd_printk_skb: 7 callbacks suppressed
	[Jan20 11:27] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [d49147fc2984d6c7cba776e093cf6a632567c637b6d7afd244e9290aa3e4ce4e] <==
	{"level":"warn","ts":"2025-01-20T11:24:28.227794Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.471256ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T11:24:28.227822Z","caller":"traceutil/trace.go:171","msg":"trace[1434549439] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1117; }","duration":"120.518383ms","start":"2025-01-20T11:24:28.107299Z","end":"2025-01-20T11:24:28.227817Z","steps":["trace[1434549439] 'agreement among raft nodes before linearized reading'  (duration: 120.484377ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T11:24:31.241278Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.066495ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T11:24:31.241438Z","caller":"traceutil/trace.go:171","msg":"trace[708215346] range","detail":"{range_begin:/registry/leases/ingress-nginx/ingress-nginx-leader; range_end:; response_count:0; response_revision:1131; }","duration":"122.286439ms","start":"2025-01-20T11:24:31.119140Z","end":"2025-01-20T11:24:31.241426Z","steps":["trace[708215346] 'range keys from in-memory index tree'  (duration: 122.016014ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T11:24:31.241447Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"219.265067ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T11:24:31.241594Z","caller":"traceutil/trace.go:171","msg":"trace[1740053298] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1131; }","duration":"219.63076ms","start":"2025-01-20T11:24:31.021952Z","end":"2025-01-20T11:24:31.241583Z","steps":["trace[1740053298] 'range keys from in-memory index tree'  (duration: 219.194766ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T11:24:31.241922Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"279.39942ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T11:24:31.241960Z","caller":"traceutil/trace.go:171","msg":"trace[2072802745] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1131; }","duration":"279.458436ms","start":"2025-01-20T11:24:30.962495Z","end":"2025-01-20T11:24:31.241954Z","steps":["trace[2072802745] 'range keys from in-memory index tree'  (duration: 279.356523ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T11:25:01.153848Z","caller":"traceutil/trace.go:171","msg":"trace[76100622] linearizableReadLoop","detail":"{readStateIndex:1347; appliedIndex:1346; }","duration":"135.449997ms","start":"2025-01-20T11:25:01.018373Z","end":"2025-01-20T11:25:01.153823Z","steps":["trace[76100622] 'read index received'  (duration: 135.06621ms)","trace[76100622] 'applied index is now lower than readState.Index'  (duration: 383.369µs)"],"step_count":2}
	{"level":"info","ts":"2025-01-20T11:25:01.154067Z","caller":"traceutil/trace.go:171","msg":"trace[1781010944] transaction","detail":"{read_only:false; response_revision:1306; number_of_response:1; }","duration":"271.296368ms","start":"2025-01-20T11:25:00.882764Z","end":"2025-01-20T11:25:01.154061Z","steps":["trace[1781010944] 'process raft request'  (duration: 270.85803ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T11:25:01.154292Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.865206ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T11:25:01.154327Z","caller":"traceutil/trace.go:171","msg":"trace[1746143935] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1306; }","duration":"135.970564ms","start":"2025-01-20T11:25:01.018350Z","end":"2025-01-20T11:25:01.154321Z","steps":["trace[1746143935] 'agreement among raft nodes before linearized reading'  (duration: 135.858456ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T11:25:01.154982Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.901591ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T11:25:01.155140Z","caller":"traceutil/trace.go:171","msg":"trace[701182978] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1306; }","duration":"131.050262ms","start":"2025-01-20T11:25:01.024040Z","end":"2025-01-20T11:25:01.155091Z","steps":["trace[701182978] 'agreement among raft nodes before linearized reading'  (duration: 130.901901ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T11:25:01.164438Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.243498ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T11:25:01.164501Z","caller":"traceutil/trace.go:171","msg":"trace[947711375] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1306; }","duration":"109.335452ms","start":"2025-01-20T11:25:01.055156Z","end":"2025-01-20T11:25:01.164492Z","steps":["trace[947711375] 'agreement among raft nodes before linearized reading'  (duration: 100.815851ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T11:25:01.164644Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.459301ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T11:25:01.164677Z","caller":"traceutil/trace.go:171","msg":"trace[1892095680] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1306; }","duration":"109.514642ms","start":"2025-01-20T11:25:01.055156Z","end":"2025-01-20T11:25:01.164671Z","steps":["trace[1892095680] 'agreement among raft nodes before linearized reading'  (duration: 100.80788ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T11:25:01.164781Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.312563ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T11:25:01.166157Z","caller":"traceutil/trace.go:171","msg":"trace[144681830] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1306; }","duration":"118.70663ms","start":"2025-01-20T11:25:01.047437Z","end":"2025-01-20T11:25:01.166143Z","steps":["trace[144681830] 'agreement among raft nodes before linearized reading'  (duration: 108.565153ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T11:25:24.013999Z","caller":"traceutil/trace.go:171","msg":"trace[1116959123] transaction","detail":"{read_only:false; response_revision:1541; number_of_response:1; }","duration":"212.436211ms","start":"2025-01-20T11:25:23.801549Z","end":"2025-01-20T11:25:24.013986Z","steps":["trace[1116959123] 'process raft request'  (duration: 212.353324ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T11:26:04.451495Z","caller":"traceutil/trace.go:171","msg":"trace[1663641349] linearizableReadLoop","detail":"{readStateIndex:1794; appliedIndex:1793; }","duration":"195.773435ms","start":"2025-01-20T11:26:04.255710Z","end":"2025-01-20T11:26:04.451483Z","steps":["trace[1663641349] 'read index received'  (duration: 195.649019ms)","trace[1663641349] 'applied index is now lower than readState.Index'  (duration: 124.001µs)"],"step_count":2}
	{"level":"info","ts":"2025-01-20T11:26:04.451564Z","caller":"traceutil/trace.go:171","msg":"trace[1953265683] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1733; }","duration":"198.632699ms","start":"2025-01-20T11:26:04.252927Z","end":"2025-01-20T11:26:04.451560Z","steps":["trace[1953265683] 'process raft request'  (duration: 198.462828ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T11:26:04.451791Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.082457ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/\" range_end:\"/registry/csinodes0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2025-01-20T11:26:04.451814Z","caller":"traceutil/trace.go:171","msg":"trace[2100206629] range","detail":"{range_begin:/registry/csinodes/; range_end:/registry/csinodes0; response_count:0; response_revision:1733; }","duration":"196.144347ms","start":"2025-01-20T11:26:04.255664Z","end":"2025-01-20T11:26:04.451809Z","steps":["trace[2100206629] 'agreement among raft nodes before linearized reading'  (duration: 196.089345ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:27:45 up 5 min,  0 users,  load average: 0.30, 1.02, 0.56
	Linux addons-158281 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [217439dc1324b4de7820b54e43f977e8865b515aa95f53653fec40383fef33fd] <==
	E0120 11:23:53.642463       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0120 11:24:47.506959       1 conn.go:339] Error on socket receive: read tcp 192.168.39.113:8443->192.168.39.1:39266: use of closed network connection
	E0120 11:24:47.684260       1 conn.go:339] Error on socket receive: read tcp 192.168.39.113:8443->192.168.39.1:39298: use of closed network connection
	I0120 11:24:56.955468       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.61.11"}
	I0120 11:25:16.158548       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0120 11:25:16.381797       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.79.223"}
	I0120 11:25:19.442029       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0120 11:25:20.472846       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0120 11:25:40.517889       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0120 11:25:54.629187       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0120 11:26:03.253315       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0120 11:26:03.253349       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0120 11:26:03.286045       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0120 11:26:03.286317       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0120 11:26:03.289515       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0120 11:26:03.289620       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0120 11:26:03.302612       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0120 11:26:03.303262       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E0120 11:26:03.403928       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"snapshot-controller\" not found]"
	I0120 11:26:03.486094       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0120 11:26:03.486172       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0120 11:26:04.290518       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0120 11:26:04.486928       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0120 11:26:04.498504       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0120 11:27:43.817183       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.49.54"}
	
	
	==> kube-controller-manager [3c4a3fa3903d3a343459f4f9e7503b722e5c51275c89ffb9cfe4586cb4039abc] <==
	E0120 11:26:34.956350       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0120 11:26:37.350211       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0120 11:26:37.351146       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0120 11:26:37.352259       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0120 11:26:37.352350       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0120 11:27:00.928059       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0120 11:27:00.929175       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0120 11:27:00.930065       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0120 11:27:00.930095       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0120 11:27:07.130705       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0120 11:27:07.131872       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0120 11:27:07.132735       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0120 11:27:07.132796       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0120 11:27:11.696893       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0120 11:27:11.697855       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0120 11:27:11.698642       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0120 11:27:11.698687       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0120 11:27:14.617064       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0120 11:27:14.618253       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0120 11:27:14.619011       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0120 11:27:14.619077       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0120 11:27:43.658387       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="22.956368ms"
	I0120 11:27:43.671329       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="12.895314ms"
	I0120 11:27:43.688234       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="16.861114ms"
	I0120 11:27:43.688353       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="28.856µs"
	
	
	==> kube-proxy [458bf7cde99da45061dd779a9b942a958ababa26fe2d9fe5d627cc04e56c73f6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0120 11:23:15.611933       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0120 11:23:15.770094       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.113"]
	E0120 11:23:15.770770       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0120 11:23:16.119459       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0120 11:23:16.119514       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0120 11:23:16.119539       1 server_linux.go:170] "Using iptables Proxier"
	I0120 11:23:16.334508       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0120 11:23:16.335026       1 server.go:497] "Version info" version="v1.32.0"
	I0120 11:23:16.335052       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0120 11:23:16.369591       1 config.go:199] "Starting service config controller"
	I0120 11:23:16.369633       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0120 11:23:16.369659       1 config.go:105] "Starting endpoint slice config controller"
	I0120 11:23:16.369663       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0120 11:23:16.370237       1 config.go:329] "Starting node config controller"
	I0120 11:23:16.370260       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0120 11:23:16.571008       1 shared_informer.go:320] Caches are synced for node config
	I0120 11:23:16.571045       1 shared_informer.go:320] Caches are synced for service config
	I0120 11:23:16.571055       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [c94fefefab6e099367c5765167117c13cb1ccf55549eed4b2f60ffa33d42e394] <==
	W0120 11:23:03.325168       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0120 11:23:03.328648       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 11:23:03.328825       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0120 11:23:03.328880       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0120 11:23:04.161744       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0120 11:23:04.161790       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0120 11:23:04.162526       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0120 11:23:04.162621       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 11:23:04.184780       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0120 11:23:04.184896       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 11:23:04.196441       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0120 11:23:04.196527       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 11:23:04.208879       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0120 11:23:04.208934       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 11:23:04.233289       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0120 11:23:04.233326       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 11:23:04.238524       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0120 11:23:04.238566       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0120 11:23:04.538376       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0120 11:23:04.538494       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 11:23:04.545413       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0120 11:23:04.545537       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 11:23:04.612209       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0120 11:23:04.612307       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0120 11:23:07.016221       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 20 11:27:05 addons-158281 kubelet[1227]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 20 11:27:05 addons-158281 kubelet[1227]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 20 11:27:05 addons-158281 kubelet[1227]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 20 11:27:05 addons-158281 kubelet[1227]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 20 11:27:06 addons-158281 kubelet[1227]: E0120 11:27:06.222332    1227 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737372426221654100,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595294,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 11:27:06 addons-158281 kubelet[1227]: E0120 11:27:06.222357    1227 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737372426221654100,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595294,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 11:27:16 addons-158281 kubelet[1227]: E0120 11:27:16.223994    1227 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737372436223763801,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595294,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 11:27:16 addons-158281 kubelet[1227]: E0120 11:27:16.224016    1227 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737372436223763801,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595294,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 11:27:26 addons-158281 kubelet[1227]: E0120 11:27:26.226509    1227 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737372446225942842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595294,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 11:27:26 addons-158281 kubelet[1227]: E0120 11:27:26.226877    1227 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737372446225942842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595294,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 11:27:36 addons-158281 kubelet[1227]: E0120 11:27:36.231070    1227 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737372456230554732,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595294,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 11:27:36 addons-158281 kubelet[1227]: E0120 11:27:36.231395    1227 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737372456230554732,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595294,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 11:27:43 addons-158281 kubelet[1227]: I0120 11:27:43.660625    1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="946601d8-93c2-4dad-9121-12b02f2d86aa" containerName="volume-snapshot-controller"
	Jan 20 11:27:43 addons-158281 kubelet[1227]: I0120 11:27:43.660665    1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="09928bb3-eae6-47bc-8f67-45cb2bda653a" containerName="liveness-probe"
	Jan 20 11:27:43 addons-158281 kubelet[1227]: I0120 11:27:43.660672    1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="09928bb3-eae6-47bc-8f67-45cb2bda653a" containerName="csi-snapshotter"
	Jan 20 11:27:43 addons-158281 kubelet[1227]: I0120 11:27:43.660679    1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="be4b0ad7-05a4-47c2-8407-a6fbd9f55a17" containerName="csi-resizer"
	Jan 20 11:27:43 addons-158281 kubelet[1227]: I0120 11:27:43.660684    1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="09928bb3-eae6-47bc-8f67-45cb2bda653a" containerName="hostpath"
	Jan 20 11:27:43 addons-158281 kubelet[1227]: I0120 11:27:43.660688    1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="fe03a673-9ccb-4593-9e74-733070f2d568" containerName="task-pv-container"
	Jan 20 11:27:43 addons-158281 kubelet[1227]: I0120 11:27:43.660695    1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="09928bb3-eae6-47bc-8f67-45cb2bda653a" containerName="csi-provisioner"
	Jan 20 11:27:43 addons-158281 kubelet[1227]: I0120 11:27:43.660700    1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="bc76a6b4-edc2-4120-95c7-c1752c6fb852" containerName="volume-snapshot-controller"
	Jan 20 11:27:43 addons-158281 kubelet[1227]: I0120 11:27:43.660705    1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="83df8c8e-e2b2-42ae-9444-8eae3b349fbf" containerName="csi-attacher"
	Jan 20 11:27:43 addons-158281 kubelet[1227]: I0120 11:27:43.660711    1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="09928bb3-eae6-47bc-8f67-45cb2bda653a" containerName="csi-external-health-monitor-controller"
	Jan 20 11:27:43 addons-158281 kubelet[1227]: I0120 11:27:43.660715    1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="09928bb3-eae6-47bc-8f67-45cb2bda653a" containerName="node-driver-registrar"
	Jan 20 11:27:43 addons-158281 kubelet[1227]: I0120 11:27:43.665064    1227 status_manager.go:890] "Failed to get status for pod" podUID="ed5738f4-dab1-4420-ab1e-03c7501e608f" pod="default/hello-world-app-7d9564db4-xk4hg" err="pods \"hello-world-app-7d9564db4-xk4hg\" is forbidden: User \"system:node:addons-158281\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-158281' and this object"
	Jan 20 11:27:43 addons-158281 kubelet[1227]: I0120 11:27:43.804966    1227 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkthw\" (UniqueName: \"kubernetes.io/projected/ed5738f4-dab1-4420-ab1e-03c7501e608f-kube-api-access-kkthw\") pod \"hello-world-app-7d9564db4-xk4hg\" (UID: \"ed5738f4-dab1-4420-ab1e-03c7501e608f\") " pod="default/hello-world-app-7d9564db4-xk4hg"
	
	
	==> storage-provisioner [1be5114147bd27b32093e6a3b646ed1226da6ebe9853af494cd61d6c0bf52fb9] <==
	I0120 11:23:17.651595       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0120 11:23:17.702541       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0120 11:23:17.702618       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0120 11:23:17.717947       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0120 11:23:17.718081       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-158281_3901dd06-f62a-4a85-9954-993c15564d0d!
	I0120 11:23:17.718178       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a3969fb8-037e-4cd4-984f-fb54ff79218d", APIVersion:"v1", ResourceVersion:"677", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-158281_3901dd06-f62a-4a85-9954-993c15564d0d became leader
	I0120 11:23:17.819198       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-158281_3901dd06-f62a-4a85-9954-993c15564d0d!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-158281 -n addons-158281
helpers_test.go:261: (dbg) Run:  kubectl --context addons-158281 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-7d9564db4-xk4hg ingress-nginx-admission-create-wkzks ingress-nginx-admission-patch-7xdlp
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-158281 describe pod hello-world-app-7d9564db4-xk4hg ingress-nginx-admission-create-wkzks ingress-nginx-admission-patch-7xdlp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-158281 describe pod hello-world-app-7d9564db4-xk4hg ingress-nginx-admission-create-wkzks ingress-nginx-admission-patch-7xdlp: exit status 1 (70.79505ms)

-- stdout --
	Name:             hello-world-app-7d9564db4-xk4hg
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-158281/192.168.39.113
	Start Time:       Mon, 20 Jan 2025 11:27:43 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=7d9564db4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-7d9564db4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kkthw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-kkthw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-7d9564db4-xk4hg to addons-158281
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-wkzks" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-7xdlp" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-158281 describe pod hello-world-app-7d9564db4-xk4hg ingress-nginx-admission-create-wkzks ingress-nginx-admission-patch-7xdlp: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-158281 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-158281 addons disable ingress-dns --alsologtostderr -v=1: (1.242052319s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-158281 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-158281 addons disable ingress --alsologtostderr -v=1: (7.660506027s)
--- FAIL: TestAddons/parallel/Ingress (159.08s)

x
+
TestFunctional/parallel/ImageCommands/ImageListShort (2.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 image ls --format short --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-linux-amd64 -p functional-473856 image ls --format short --alsologtostderr: (2.246528263s)
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-473856 image ls --format short --alsologtostderr:

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-473856 image ls --format short --alsologtostderr:
I0120 11:33:05.958084  958409 out.go:345] Setting OutFile to fd 1 ...
I0120 11:33:05.958205  958409 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 11:33:05.958217  958409 out.go:358] Setting ErrFile to fd 2...
I0120 11:33:05.958224  958409 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 11:33:05.958378  958409 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-942401/.minikube/bin
I0120 11:33:05.959022  958409 config.go:182] Loaded profile config "functional-473856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 11:33:05.959132  958409 config.go:182] Loaded profile config "functional-473856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 11:33:05.959471  958409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 11:33:05.959529  958409 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 11:33:05.974358  958409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35359
I0120 11:33:05.974832  958409 main.go:141] libmachine: () Calling .GetVersion
I0120 11:33:05.975500  958409 main.go:141] libmachine: Using API Version  1
I0120 11:33:05.975534  958409 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 11:33:05.975881  958409 main.go:141] libmachine: () Calling .GetMachineName
I0120 11:33:05.976131  958409 main.go:141] libmachine: (functional-473856) Calling .GetState
I0120 11:33:05.977977  958409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 11:33:05.978020  958409 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 11:33:05.992493  958409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38441
I0120 11:33:05.992974  958409 main.go:141] libmachine: () Calling .GetVersion
I0120 11:33:05.993594  958409 main.go:141] libmachine: Using API Version  1
I0120 11:33:05.993621  958409 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 11:33:05.994027  958409 main.go:141] libmachine: () Calling .GetMachineName
I0120 11:33:05.994336  958409 main.go:141] libmachine: (functional-473856) Calling .DriverName
I0120 11:33:05.994595  958409 ssh_runner.go:195] Run: systemctl --version
I0120 11:33:05.994623  958409 main.go:141] libmachine: (functional-473856) Calling .GetSSHHostname
I0120 11:33:05.997602  958409 main.go:141] libmachine: (functional-473856) DBG | domain functional-473856 has defined MAC address 52:54:00:9d:21:36 in network mk-functional-473856
I0120 11:33:05.998062  958409 main.go:141] libmachine: (functional-473856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:21:36", ip: ""} in network mk-functional-473856: {Iface:virbr1 ExpiryTime:2025-01-20 12:30:32 +0000 UTC Type:0 Mac:52:54:00:9d:21:36 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:functional-473856 Clientid:01:52:54:00:9d:21:36}
I0120 11:33:05.998093  958409 main.go:141] libmachine: (functional-473856) DBG | domain functional-473856 has defined IP address 192.168.39.214 and MAC address 52:54:00:9d:21:36 in network mk-functional-473856
I0120 11:33:05.998265  958409 main.go:141] libmachine: (functional-473856) Calling .GetSSHPort
I0120 11:33:05.998430  958409 main.go:141] libmachine: (functional-473856) Calling .GetSSHKeyPath
I0120 11:33:05.998578  958409 main.go:141] libmachine: (functional-473856) Calling .GetSSHUsername
I0120 11:33:05.998760  958409 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/functional-473856/id_rsa Username:docker}
I0120 11:33:06.100164  958409 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 11:33:08.149959  958409 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.049757122s)
W0120 11:33:08.150044  958409 cache_images.go:734] Failed to list images for profile functional-473856 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

stderr:
E0120 11:33:08.133410    8250 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="&ImageFilter{Image:&ImageSpec{Image:,Annotations:map[string]string{},UserSpecifiedImage:,},}"
time="2025-01-20T11:33:08Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
I0120 11:33:08.150113  958409 main.go:141] libmachine: Making call to close driver server
I0120 11:33:08.150132  958409 main.go:141] libmachine: (functional-473856) Calling .Close
I0120 11:33:08.150476  958409 main.go:141] libmachine: Successfully made call to close driver server
I0120 11:33:08.150496  958409 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 11:33:08.150507  958409 main.go:141] libmachine: Making call to close driver server
I0120 11:33:08.150534  958409 main.go:141] libmachine: (functional-473856) Calling .Close
I0120 11:33:08.150811  958409 main.go:141] libmachine: Successfully made call to close driver server
I0120 11:33:08.150829  958409 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:275: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (2.25s)
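Note: ImageListShort only checks that the default pause image appears when minikube lists images through CRI-O; the failure above is the "sudo crictl images --output json" call timing out (DeadlineExceeded), not evidence the image is absent. The same query can be replayed by hand outside the test harness. A minimal sketch, not the test's own code: it shells out to the minikube binary named in the log and greps the CRI-O image list for registry.k8s.io/pause; the 30-second timeout is an assumption.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Replay the call that timed out above, issued over "minikube ssh".
	// Profile name and binary path are copied from the log; the timeout is assumed.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64",
		"-p", "functional-473856", "ssh", "--",
		"sudo", "crictl", "images", "--output", "json")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("crictl images failed: %v\n%s\n", err, out)
		return
	}
	if strings.Contains(string(out), "registry.k8s.io/pause") {
		fmt.Println("pause image present")
	} else {
		fmt.Println("pause image missing from CRI-O image list")
	}
}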

                                                
                                    
x
+
TestPreload (173.15s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-013266 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0120 12:14:37.399635  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-013266 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m29.951511036s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-013266 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-013266 image pull gcr.io/k8s-minikube/busybox: (3.075416802s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-013266
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-013266: (7.285255149s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-013266 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-013266 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m9.791543566s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-013266 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:629: *** TestPreload FAILED at 2025-01-20 12:17:14.516628282 +0000 UTC m=+3325.762724685
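Note: TestPreload exercises the preload path end to end: create a cluster with --preload=false, pull gcr.io/k8s-minikube/busybox into it, stop, restart (letting the preloaded tarball supply the images this time), and assert that the manually pulled busybox survives the cycle. The image list above shows it did not. A minimal reproduction sketch, not the test's own code: it replays the commands recorded in the Audit table below, assuming the minikube binary at out/minikube-linux-amd64 and a hypothetical scratch profile name.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run invokes the minikube binary against one profile and returns combined output.
func run(profile string, args ...string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64",
		append([]string{"-p", profile}, args...)...).CombinedOutput()
	return string(out), err
}

func main() {
	const profile = "preload-repro" // hypothetical scratch profile name

	steps := [][]string{
		{"start", "--memory=2200", "--preload=false", "--driver=kvm2", "--container-runtime=crio", "--kubernetes-version=v1.24.4"},
		{"image", "pull", "gcr.io/k8s-minikube/busybox"},
		{"stop"},
		{"start", "--memory=2200", "--driver=kvm2", "--container-runtime=crio"},
	}
	for _, step := range steps {
		if out, err := run(profile, step...); err != nil {
			fmt.Printf("step %v failed: %v\n%s\n", step, err, out)
			return
		}
	}

	out, err := run(profile, "image", "list")
	if err != nil {
		fmt.Printf("image list failed: %v\n%s\n", err, out)
		return
	}
	if strings.Contains(out, "gcr.io/k8s-minikube/busybox") {
		fmt.Println("busybox still listed after restart")
	} else {
		fmt.Println("regression reproduced: busybox missing after restart")
	}
}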
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-013266 -n test-preload-013266
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-013266 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-013266 logs -n 25: (1.020019345s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-222827 ssh -n                                                                 | multinode-222827     | jenkins | v1.35.0 | 20 Jan 25 12:02 UTC | 20 Jan 25 12:02 UTC |
	|         | multinode-222827-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-222827 ssh -n multinode-222827 sudo cat                                       | multinode-222827     | jenkins | v1.35.0 | 20 Jan 25 12:02 UTC | 20 Jan 25 12:02 UTC |
	|         | /home/docker/cp-test_multinode-222827-m03_multinode-222827.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-222827 cp multinode-222827-m03:/home/docker/cp-test.txt                       | multinode-222827     | jenkins | v1.35.0 | 20 Jan 25 12:02 UTC | 20 Jan 25 12:02 UTC |
	|         | multinode-222827-m02:/home/docker/cp-test_multinode-222827-m03_multinode-222827-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-222827 ssh -n                                                                 | multinode-222827     | jenkins | v1.35.0 | 20 Jan 25 12:02 UTC | 20 Jan 25 12:02 UTC |
	|         | multinode-222827-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-222827 ssh -n multinode-222827-m02 sudo cat                                   | multinode-222827     | jenkins | v1.35.0 | 20 Jan 25 12:02 UTC | 20 Jan 25 12:02 UTC |
	|         | /home/docker/cp-test_multinode-222827-m03_multinode-222827-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-222827 node stop m03                                                          | multinode-222827     | jenkins | v1.35.0 | 20 Jan 25 12:02 UTC | 20 Jan 25 12:02 UTC |
	| node    | multinode-222827 node start                                                             | multinode-222827     | jenkins | v1.35.0 | 20 Jan 25 12:02 UTC | 20 Jan 25 12:03 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-222827                                                                | multinode-222827     | jenkins | v1.35.0 | 20 Jan 25 12:03 UTC |                     |
	| stop    | -p multinode-222827                                                                     | multinode-222827     | jenkins | v1.35.0 | 20 Jan 25 12:03 UTC | 20 Jan 25 12:06 UTC |
	| start   | -p multinode-222827                                                                     | multinode-222827     | jenkins | v1.35.0 | 20 Jan 25 12:06 UTC | 20 Jan 25 12:08 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-222827                                                                | multinode-222827     | jenkins | v1.35.0 | 20 Jan 25 12:08 UTC |                     |
	| node    | multinode-222827 node delete                                                            | multinode-222827     | jenkins | v1.35.0 | 20 Jan 25 12:08 UTC | 20 Jan 25 12:09 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-222827 stop                                                                   | multinode-222827     | jenkins | v1.35.0 | 20 Jan 25 12:09 UTC | 20 Jan 25 12:12 UTC |
	| start   | -p multinode-222827                                                                     | multinode-222827     | jenkins | v1.35.0 | 20 Jan 25 12:12 UTC | 20 Jan 25 12:13 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-222827                                                                | multinode-222827     | jenkins | v1.35.0 | 20 Jan 25 12:13 UTC |                     |
	| start   | -p multinode-222827-m02                                                                 | multinode-222827-m02 | jenkins | v1.35.0 | 20 Jan 25 12:13 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-222827-m03                                                                 | multinode-222827-m03 | jenkins | v1.35.0 | 20 Jan 25 12:13 UTC | 20 Jan 25 12:14 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-222827                                                                 | multinode-222827     | jenkins | v1.35.0 | 20 Jan 25 12:14 UTC |                     |
	| delete  | -p multinode-222827-m03                                                                 | multinode-222827-m03 | jenkins | v1.35.0 | 20 Jan 25 12:14 UTC | 20 Jan 25 12:14 UTC |
	| delete  | -p multinode-222827                                                                     | multinode-222827     | jenkins | v1.35.0 | 20 Jan 25 12:14 UTC | 20 Jan 25 12:14 UTC |
	| start   | -p test-preload-013266                                                                  | test-preload-013266  | jenkins | v1.35.0 | 20 Jan 25 12:14 UTC | 20 Jan 25 12:15 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-013266 image pull                                                          | test-preload-013266  | jenkins | v1.35.0 | 20 Jan 25 12:15 UTC | 20 Jan 25 12:15 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-013266                                                                  | test-preload-013266  | jenkins | v1.35.0 | 20 Jan 25 12:15 UTC | 20 Jan 25 12:16 UTC |
	| start   | -p test-preload-013266                                                                  | test-preload-013266  | jenkins | v1.35.0 | 20 Jan 25 12:16 UTC | 20 Jan 25 12:17 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-013266 image list                                                          | test-preload-013266  | jenkins | v1.35.0 | 20 Jan 25 12:17 UTC | 20 Jan 25 12:17 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 12:16:04
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 12:16:04.546896  980156 out.go:345] Setting OutFile to fd 1 ...
	I0120 12:16:04.547149  980156 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:16:04.547159  980156 out.go:358] Setting ErrFile to fd 2...
	I0120 12:16:04.547163  980156 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:16:04.547355  980156 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-942401/.minikube/bin
	I0120 12:16:04.547892  980156 out.go:352] Setting JSON to false
	I0120 12:16:04.548830  980156 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":17907,"bootTime":1737357457,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 12:16:04.548931  980156 start.go:139] virtualization: kvm guest
	I0120 12:16:04.551192  980156 out.go:177] * [test-preload-013266] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 12:16:04.552575  980156 out.go:177]   - MINIKUBE_LOCATION=20151
	I0120 12:16:04.552587  980156 notify.go:220] Checking for updates...
	I0120 12:16:04.555088  980156 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 12:16:04.556538  980156 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:16:04.557871  980156 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-942401/.minikube
	I0120 12:16:04.559237  980156 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 12:16:04.560510  980156 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 12:16:04.562152  980156 config.go:182] Loaded profile config "test-preload-013266": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0120 12:16:04.562769  980156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:16:04.562808  980156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:16:04.578979  980156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44859
	I0120 12:16:04.579426  980156 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:16:04.580055  980156 main.go:141] libmachine: Using API Version  1
	I0120 12:16:04.580078  980156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:16:04.580442  980156 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:16:04.580682  980156 main.go:141] libmachine: (test-preload-013266) Calling .DriverName
	I0120 12:16:04.582376  980156 out.go:177] * Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
	I0120 12:16:04.583517  980156 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 12:16:04.583826  980156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:16:04.583880  980156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:16:04.598192  980156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36935
	I0120 12:16:04.598625  980156 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:16:04.599050  980156 main.go:141] libmachine: Using API Version  1
	I0120 12:16:04.599075  980156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:16:04.599384  980156 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:16:04.599593  980156 main.go:141] libmachine: (test-preload-013266) Calling .DriverName
	I0120 12:16:04.633847  980156 out.go:177] * Using the kvm2 driver based on existing profile
	I0120 12:16:04.635140  980156 start.go:297] selected driver: kvm2
	I0120 12:16:04.635155  980156 start.go:901] validating driver "kvm2" against &{Name:test-preload-013266 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-013266 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:16:04.635275  980156 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 12:16:04.635936  980156 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:16:04.636032  980156 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20151-942401/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 12:16:04.650629  980156 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 12:16:04.651100  980156 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 12:16:04.651146  980156 cni.go:84] Creating CNI manager for ""
	I0120 12:16:04.651200  980156 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:16:04.651272  980156 start.go:340] cluster config:
	{Name:test-preload-013266 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-013266 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:16:04.651418  980156 iso.go:125] acquiring lock: {Name:mk7f0ac9e7ba04626414e742c3e5292d79996a4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:16:04.653203  980156 out.go:177] * Starting "test-preload-013266" primary control-plane node in "test-preload-013266" cluster
	I0120 12:16:04.654723  980156 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0120 12:16:05.545174  980156 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0120 12:16:05.545229  980156 cache.go:56] Caching tarball of preloaded images
	I0120 12:16:05.545398  980156 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0120 12:16:05.549984  980156 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0120 12:16:05.551185  980156 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0120 12:16:05.650771  980156 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0120 12:16:16.549551  980156 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0120 12:16:16.549640  980156 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0120 12:16:17.411733  980156 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0120 12:16:17.411867  980156 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/test-preload-013266/config.json ...
	I0120 12:16:17.412118  980156 start.go:360] acquireMachinesLock for test-preload-013266: {Name:mkd5527ce9753efd08511b23d71dbb6bbf416f1b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 12:16:17.412199  980156 start.go:364] duration metric: took 55.045µs to acquireMachinesLock for "test-preload-013266"
	I0120 12:16:17.412224  980156 start.go:96] Skipping create...Using existing machine configuration
	I0120 12:16:17.412237  980156 fix.go:54] fixHost starting: 
	I0120 12:16:17.412523  980156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:16:17.412571  980156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:16:17.427298  980156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40849
	I0120 12:16:17.427732  980156 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:16:17.428193  980156 main.go:141] libmachine: Using API Version  1
	I0120 12:16:17.428222  980156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:16:17.428573  980156 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:16:17.428791  980156 main.go:141] libmachine: (test-preload-013266) Calling .DriverName
	I0120 12:16:17.428940  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetState
	I0120 12:16:17.430636  980156 fix.go:112] recreateIfNeeded on test-preload-013266: state=Stopped err=<nil>
	I0120 12:16:17.430661  980156 main.go:141] libmachine: (test-preload-013266) Calling .DriverName
	W0120 12:16:17.430827  980156 fix.go:138] unexpected machine state, will restart: <nil>
	I0120 12:16:17.432962  980156 out.go:177] * Restarting existing kvm2 VM for "test-preload-013266" ...
	I0120 12:16:17.434286  980156 main.go:141] libmachine: (test-preload-013266) Calling .Start
	I0120 12:16:17.434476  980156 main.go:141] libmachine: (test-preload-013266) starting domain...
	I0120 12:16:17.434500  980156 main.go:141] libmachine: (test-preload-013266) ensuring networks are active...
	I0120 12:16:17.435156  980156 main.go:141] libmachine: (test-preload-013266) Ensuring network default is active
	I0120 12:16:17.435550  980156 main.go:141] libmachine: (test-preload-013266) Ensuring network mk-test-preload-013266 is active
	I0120 12:16:17.435933  980156 main.go:141] libmachine: (test-preload-013266) getting domain XML...
	I0120 12:16:17.436705  980156 main.go:141] libmachine: (test-preload-013266) creating domain...
	I0120 12:16:18.616641  980156 main.go:141] libmachine: (test-preload-013266) waiting for IP...
	I0120 12:16:18.617501  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:18.617870  980156 main.go:141] libmachine: (test-preload-013266) DBG | unable to find current IP address of domain test-preload-013266 in network mk-test-preload-013266
	I0120 12:16:18.617983  980156 main.go:141] libmachine: (test-preload-013266) DBG | I0120 12:16:18.617859  980224 retry.go:31] will retry after 204.540966ms: waiting for domain to come up
	I0120 12:16:18.824324  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:18.824795  980156 main.go:141] libmachine: (test-preload-013266) DBG | unable to find current IP address of domain test-preload-013266 in network mk-test-preload-013266
	I0120 12:16:18.824833  980156 main.go:141] libmachine: (test-preload-013266) DBG | I0120 12:16:18.824768  980224 retry.go:31] will retry after 243.019096ms: waiting for domain to come up
	I0120 12:16:19.069234  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:19.069648  980156 main.go:141] libmachine: (test-preload-013266) DBG | unable to find current IP address of domain test-preload-013266 in network mk-test-preload-013266
	I0120 12:16:19.069677  980156 main.go:141] libmachine: (test-preload-013266) DBG | I0120 12:16:19.069623  980224 retry.go:31] will retry after 470.172397ms: waiting for domain to come up
	I0120 12:16:19.541149  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:19.541496  980156 main.go:141] libmachine: (test-preload-013266) DBG | unable to find current IP address of domain test-preload-013266 in network mk-test-preload-013266
	I0120 12:16:19.541520  980156 main.go:141] libmachine: (test-preload-013266) DBG | I0120 12:16:19.541478  980224 retry.go:31] will retry after 419.673045ms: waiting for domain to come up
	I0120 12:16:19.963434  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:19.963935  980156 main.go:141] libmachine: (test-preload-013266) DBG | unable to find current IP address of domain test-preload-013266 in network mk-test-preload-013266
	I0120 12:16:19.963971  980156 main.go:141] libmachine: (test-preload-013266) DBG | I0120 12:16:19.963901  980224 retry.go:31] will retry after 521.254724ms: waiting for domain to come up
	I0120 12:16:20.486494  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:20.486964  980156 main.go:141] libmachine: (test-preload-013266) DBG | unable to find current IP address of domain test-preload-013266 in network mk-test-preload-013266
	I0120 12:16:20.486988  980156 main.go:141] libmachine: (test-preload-013266) DBG | I0120 12:16:20.486903  980224 retry.go:31] will retry after 830.548943ms: waiting for domain to come up
	I0120 12:16:21.318877  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:21.319247  980156 main.go:141] libmachine: (test-preload-013266) DBG | unable to find current IP address of domain test-preload-013266 in network mk-test-preload-013266
	I0120 12:16:21.319274  980156 main.go:141] libmachine: (test-preload-013266) DBG | I0120 12:16:21.319216  980224 retry.go:31] will retry after 817.303778ms: waiting for domain to come up
	I0120 12:16:22.137653  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:22.138061  980156 main.go:141] libmachine: (test-preload-013266) DBG | unable to find current IP address of domain test-preload-013266 in network mk-test-preload-013266
	I0120 12:16:22.138088  980156 main.go:141] libmachine: (test-preload-013266) DBG | I0120 12:16:22.138025  980224 retry.go:31] will retry after 983.49907ms: waiting for domain to come up
	I0120 12:16:23.123417  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:23.123743  980156 main.go:141] libmachine: (test-preload-013266) DBG | unable to find current IP address of domain test-preload-013266 in network mk-test-preload-013266
	I0120 12:16:23.123787  980156 main.go:141] libmachine: (test-preload-013266) DBG | I0120 12:16:23.123715  980224 retry.go:31] will retry after 1.425668236s: waiting for domain to come up
	I0120 12:16:24.551217  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:24.551649  980156 main.go:141] libmachine: (test-preload-013266) DBG | unable to find current IP address of domain test-preload-013266 in network mk-test-preload-013266
	I0120 12:16:24.551672  980156 main.go:141] libmachine: (test-preload-013266) DBG | I0120 12:16:24.551618  980224 retry.go:31] will retry after 2.152494257s: waiting for domain to come up
	I0120 12:16:26.707108  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:26.707613  980156 main.go:141] libmachine: (test-preload-013266) DBG | unable to find current IP address of domain test-preload-013266 in network mk-test-preload-013266
	I0120 12:16:26.707633  980156 main.go:141] libmachine: (test-preload-013266) DBG | I0120 12:16:26.707564  980224 retry.go:31] will retry after 2.437038968s: waiting for domain to come up
	I0120 12:16:29.145855  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:29.146259  980156 main.go:141] libmachine: (test-preload-013266) DBG | unable to find current IP address of domain test-preload-013266 in network mk-test-preload-013266
	I0120 12:16:29.146310  980156 main.go:141] libmachine: (test-preload-013266) DBG | I0120 12:16:29.146217  980224 retry.go:31] will retry after 2.909294279s: waiting for domain to come up
	I0120 12:16:32.059399  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:32.059838  980156 main.go:141] libmachine: (test-preload-013266) DBG | unable to find current IP address of domain test-preload-013266 in network mk-test-preload-013266
	I0120 12:16:32.059858  980156 main.go:141] libmachine: (test-preload-013266) DBG | I0120 12:16:32.059805  980224 retry.go:31] will retry after 4.216633203s: waiting for domain to come up
	I0120 12:16:36.280741  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:36.281139  980156 main.go:141] libmachine: (test-preload-013266) found domain IP: 192.168.39.82
	I0120 12:16:36.281170  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has current primary IP address 192.168.39.82 and MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:36.281179  980156 main.go:141] libmachine: (test-preload-013266) reserving static IP address...
	I0120 12:16:36.281564  980156 main.go:141] libmachine: (test-preload-013266) reserved static IP address 192.168.39.82 for domain test-preload-013266
	I0120 12:16:36.281611  980156 main.go:141] libmachine: (test-preload-013266) DBG | found host DHCP lease matching {name: "test-preload-013266", mac: "52:54:00:9a:10:44", ip: "192.168.39.82"} in network mk-test-preload-013266: {Iface:virbr1 ExpiryTime:2025-01-20 13:16:28 +0000 UTC Type:0 Mac:52:54:00:9a:10:44 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:test-preload-013266 Clientid:01:52:54:00:9a:10:44}
	I0120 12:16:36.281628  980156 main.go:141] libmachine: (test-preload-013266) waiting for SSH...
	I0120 12:16:36.281655  980156 main.go:141] libmachine: (test-preload-013266) DBG | skip adding static IP to network mk-test-preload-013266 - found existing host DHCP lease matching {name: "test-preload-013266", mac: "52:54:00:9a:10:44", ip: "192.168.39.82"}
	I0120 12:16:36.281670  980156 main.go:141] libmachine: (test-preload-013266) DBG | Getting to WaitForSSH function...
	I0120 12:16:36.283581  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:36.283861  980156 main.go:141] libmachine: (test-preload-013266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:10:44", ip: ""} in network mk-test-preload-013266: {Iface:virbr1 ExpiryTime:2025-01-20 13:16:28 +0000 UTC Type:0 Mac:52:54:00:9a:10:44 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:test-preload-013266 Clientid:01:52:54:00:9a:10:44}
	I0120 12:16:36.283893  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined IP address 192.168.39.82 and MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:36.283974  980156 main.go:141] libmachine: (test-preload-013266) DBG | Using SSH client type: external
	I0120 12:16:36.284003  980156 main.go:141] libmachine: (test-preload-013266) DBG | Using SSH private key: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/test-preload-013266/id_rsa (-rw-------)
	I0120 12:16:36.284042  980156 main.go:141] libmachine: (test-preload-013266) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.82 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20151-942401/.minikube/machines/test-preload-013266/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 12:16:36.284055  980156 main.go:141] libmachine: (test-preload-013266) DBG | About to run SSH command:
	I0120 12:16:36.284068  980156 main.go:141] libmachine: (test-preload-013266) DBG | exit 0
	I0120 12:16:36.409811  980156 main.go:141] libmachine: (test-preload-013266) DBG | SSH cmd err, output: <nil>: 
	I0120 12:16:36.410235  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetConfigRaw
	I0120 12:16:36.410894  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetIP
	I0120 12:16:36.413135  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:36.413475  980156 main.go:141] libmachine: (test-preload-013266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:10:44", ip: ""} in network mk-test-preload-013266: {Iface:virbr1 ExpiryTime:2025-01-20 13:16:28 +0000 UTC Type:0 Mac:52:54:00:9a:10:44 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:test-preload-013266 Clientid:01:52:54:00:9a:10:44}
	I0120 12:16:36.413522  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined IP address 192.168.39.82 and MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:36.413718  980156 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/test-preload-013266/config.json ...
	I0120 12:16:36.413898  980156 machine.go:93] provisionDockerMachine start ...
	I0120 12:16:36.413916  980156 main.go:141] libmachine: (test-preload-013266) Calling .DriverName
	I0120 12:16:36.414179  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHHostname
	I0120 12:16:36.416254  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:36.416582  980156 main.go:141] libmachine: (test-preload-013266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:10:44", ip: ""} in network mk-test-preload-013266: {Iface:virbr1 ExpiryTime:2025-01-20 13:16:28 +0000 UTC Type:0 Mac:52:54:00:9a:10:44 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:test-preload-013266 Clientid:01:52:54:00:9a:10:44}
	I0120 12:16:36.416612  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined IP address 192.168.39.82 and MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:36.416778  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHPort
	I0120 12:16:36.416964  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHKeyPath
	I0120 12:16:36.417157  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHKeyPath
	I0120 12:16:36.417317  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHUsername
	I0120 12:16:36.417519  980156 main.go:141] libmachine: Using SSH client type: native
	I0120 12:16:36.417740  980156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I0120 12:16:36.417755  980156 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 12:16:36.526128  980156 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0120 12:16:36.526152  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetMachineName
	I0120 12:16:36.526421  980156 buildroot.go:166] provisioning hostname "test-preload-013266"
	I0120 12:16:36.526463  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetMachineName
	I0120 12:16:36.526667  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHHostname
	I0120 12:16:36.529387  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:36.529795  980156 main.go:141] libmachine: (test-preload-013266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:10:44", ip: ""} in network mk-test-preload-013266: {Iface:virbr1 ExpiryTime:2025-01-20 13:16:28 +0000 UTC Type:0 Mac:52:54:00:9a:10:44 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:test-preload-013266 Clientid:01:52:54:00:9a:10:44}
	I0120 12:16:36.529821  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined IP address 192.168.39.82 and MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:36.529942  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHPort
	I0120 12:16:36.530121  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHKeyPath
	I0120 12:16:36.530311  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHKeyPath
	I0120 12:16:36.530460  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHUsername
	I0120 12:16:36.530729  980156 main.go:141] libmachine: Using SSH client type: native
	I0120 12:16:36.530922  980156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I0120 12:16:36.530936  980156 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-013266 && echo "test-preload-013266" | sudo tee /etc/hostname
	I0120 12:16:36.652660  980156 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-013266
	
	I0120 12:16:36.652690  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHHostname
	I0120 12:16:36.655413  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:36.655811  980156 main.go:141] libmachine: (test-preload-013266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:10:44", ip: ""} in network mk-test-preload-013266: {Iface:virbr1 ExpiryTime:2025-01-20 13:16:28 +0000 UTC Type:0 Mac:52:54:00:9a:10:44 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:test-preload-013266 Clientid:01:52:54:00:9a:10:44}
	I0120 12:16:36.655839  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined IP address 192.168.39.82 and MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:36.656041  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHPort
	I0120 12:16:36.656222  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHKeyPath
	I0120 12:16:36.656385  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHKeyPath
	I0120 12:16:36.656505  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHUsername
	I0120 12:16:36.656629  980156 main.go:141] libmachine: Using SSH client type: native
	I0120 12:16:36.656823  980156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I0120 12:16:36.656840  980156 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-013266' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-013266/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-013266' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 12:16:36.774072  980156 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 12:16:36.774094  980156 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20151-942401/.minikube CaCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20151-942401/.minikube}
	I0120 12:16:36.774119  980156 buildroot.go:174] setting up certificates
	I0120 12:16:36.774129  980156 provision.go:84] configureAuth start
	I0120 12:16:36.774139  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetMachineName
	I0120 12:16:36.774396  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetIP
	I0120 12:16:36.776849  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:36.777212  980156 main.go:141] libmachine: (test-preload-013266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:10:44", ip: ""} in network mk-test-preload-013266: {Iface:virbr1 ExpiryTime:2025-01-20 13:16:28 +0000 UTC Type:0 Mac:52:54:00:9a:10:44 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:test-preload-013266 Clientid:01:52:54:00:9a:10:44}
	I0120 12:16:36.777244  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined IP address 192.168.39.82 and MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:36.777362  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHHostname
	I0120 12:16:36.779867  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:36.780199  980156 main.go:141] libmachine: (test-preload-013266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:10:44", ip: ""} in network mk-test-preload-013266: {Iface:virbr1 ExpiryTime:2025-01-20 13:16:28 +0000 UTC Type:0 Mac:52:54:00:9a:10:44 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:test-preload-013266 Clientid:01:52:54:00:9a:10:44}
	I0120 12:16:36.780224  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined IP address 192.168.39.82 and MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:36.780362  980156 provision.go:143] copyHostCerts
	I0120 12:16:36.780424  980156 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem, removing ...
	I0120 12:16:36.780448  980156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem
	I0120 12:16:36.780523  980156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem (1078 bytes)
	I0120 12:16:36.780625  980156 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem, removing ...
	I0120 12:16:36.780637  980156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem
	I0120 12:16:36.780674  980156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem (1123 bytes)
	I0120 12:16:36.780806  980156 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem, removing ...
	I0120 12:16:36.780818  980156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem
	I0120 12:16:36.780846  980156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem (1675 bytes)
	I0120 12:16:36.780922  980156 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem org=jenkins.test-preload-013266 san=[127.0.0.1 192.168.39.82 localhost minikube test-preload-013266]
	I0120 12:16:36.985745  980156 provision.go:177] copyRemoteCerts
	I0120 12:16:36.985802  980156 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 12:16:36.985829  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHHostname
	I0120 12:16:36.988591  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:36.988892  980156 main.go:141] libmachine: (test-preload-013266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:10:44", ip: ""} in network mk-test-preload-013266: {Iface:virbr1 ExpiryTime:2025-01-20 13:16:28 +0000 UTC Type:0 Mac:52:54:00:9a:10:44 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:test-preload-013266 Clientid:01:52:54:00:9a:10:44}
	I0120 12:16:36.988919  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined IP address 192.168.39.82 and MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:36.989065  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHPort
	I0120 12:16:36.989274  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHKeyPath
	I0120 12:16:36.989430  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHUsername
	I0120 12:16:36.989547  980156 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/test-preload-013266/id_rsa Username:docker}
	I0120 12:16:37.071736  980156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0120 12:16:37.093229  980156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0120 12:16:37.113827  980156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0120 12:16:37.134610  980156 provision.go:87] duration metric: took 360.47074ms to configureAuth
	I0120 12:16:37.134631  980156 buildroot.go:189] setting minikube options for container-runtime
	I0120 12:16:37.134775  980156 config.go:182] Loaded profile config "test-preload-013266": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0120 12:16:37.134860  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHHostname
	I0120 12:16:37.137810  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:37.138148  980156 main.go:141] libmachine: (test-preload-013266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:10:44", ip: ""} in network mk-test-preload-013266: {Iface:virbr1 ExpiryTime:2025-01-20 13:16:28 +0000 UTC Type:0 Mac:52:54:00:9a:10:44 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:test-preload-013266 Clientid:01:52:54:00:9a:10:44}
	I0120 12:16:37.138171  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined IP address 192.168.39.82 and MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:37.138346  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHPort
	I0120 12:16:37.138555  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHKeyPath
	I0120 12:16:37.138706  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHKeyPath
	I0120 12:16:37.138816  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHUsername
	I0120 12:16:37.138949  980156 main.go:141] libmachine: Using SSH client type: native
	I0120 12:16:37.139156  980156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I0120 12:16:37.139181  980156 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 12:16:37.356781  980156 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 12:16:37.356822  980156 machine.go:96] duration metric: took 942.908651ms to provisionDockerMachine
	I0120 12:16:37.356842  980156 start.go:293] postStartSetup for "test-preload-013266" (driver="kvm2")
	I0120 12:16:37.356859  980156 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 12:16:37.356887  980156 main.go:141] libmachine: (test-preload-013266) Calling .DriverName
	I0120 12:16:37.357259  980156 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 12:16:37.357299  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHHostname
	I0120 12:16:37.360234  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:37.360607  980156 main.go:141] libmachine: (test-preload-013266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:10:44", ip: ""} in network mk-test-preload-013266: {Iface:virbr1 ExpiryTime:2025-01-20 13:16:28 +0000 UTC Type:0 Mac:52:54:00:9a:10:44 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:test-preload-013266 Clientid:01:52:54:00:9a:10:44}
	I0120 12:16:37.360642  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined IP address 192.168.39.82 and MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:37.360793  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHPort
	I0120 12:16:37.360981  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHKeyPath
	I0120 12:16:37.361234  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHUsername
	I0120 12:16:37.361410  980156 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/test-preload-013266/id_rsa Username:docker}
	I0120 12:16:37.443572  980156 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 12:16:37.447216  980156 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 12:16:37.447235  980156 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-942401/.minikube/addons for local assets ...
	I0120 12:16:37.447286  980156 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-942401/.minikube/files for local assets ...
	I0120 12:16:37.447356  980156 filesync.go:149] local asset: /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem -> 9496562.pem in /etc/ssl/certs
	I0120 12:16:37.447446  980156 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 12:16:37.457220  980156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem --> /etc/ssl/certs/9496562.pem (1708 bytes)
	I0120 12:16:37.480280  980156 start.go:296] duration metric: took 123.425396ms for postStartSetup
	I0120 12:16:37.480318  980156 fix.go:56] duration metric: took 20.068081947s for fixHost
	I0120 12:16:37.480342  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHHostname
	I0120 12:16:37.483038  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:37.483377  980156 main.go:141] libmachine: (test-preload-013266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:10:44", ip: ""} in network mk-test-preload-013266: {Iface:virbr1 ExpiryTime:2025-01-20 13:16:28 +0000 UTC Type:0 Mac:52:54:00:9a:10:44 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:test-preload-013266 Clientid:01:52:54:00:9a:10:44}
	I0120 12:16:37.483398  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined IP address 192.168.39.82 and MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:37.483597  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHPort
	I0120 12:16:37.483786  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHKeyPath
	I0120 12:16:37.483973  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHKeyPath
	I0120 12:16:37.484105  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHUsername
	I0120 12:16:37.484289  980156 main.go:141] libmachine: Using SSH client type: native
	I0120 12:16:37.484481  980156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I0120 12:16:37.484495  980156 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 12:16:37.594329  980156 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737375397.572031746
	
	I0120 12:16:37.594359  980156 fix.go:216] guest clock: 1737375397.572031746
	I0120 12:16:37.594382  980156 fix.go:229] Guest: 2025-01-20 12:16:37.572031746 +0000 UTC Remote: 2025-01-20 12:16:37.480324519 +0000 UTC m=+32.972610009 (delta=91.707227ms)
	I0120 12:16:37.594435  980156 fix.go:200] guest clock delta is within tolerance: 91.707227ms
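[editor's note] The fix.go lines above compare the guest's clock ("date +%s.%N" over SSH) against the host's and accept the ~92 ms delta. A rough manual equivalent using the key path from this run (purely illustrative; the tolerance minikube applies is not reproduced here):
	key=/home/jenkins/minikube-integration/20151-942401/.minikube/machines/test-preload-013266/id_rsa
	guest=$(ssh -i "$key" docker@192.168.39.82 'date +%s.%N')
	host=$(date +%s.%N)
	awk -v g="$guest" -v h="$host" 'BEGIN { d = h - g; if (d < 0) d = -d; print "clock delta:", d, "s" }'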
	I0120 12:16:37.594445  980156 start.go:83] releasing machines lock for "test-preload-013266", held for 20.182230808s
	I0120 12:16:37.594474  980156 main.go:141] libmachine: (test-preload-013266) Calling .DriverName
	I0120 12:16:37.594749  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetIP
	I0120 12:16:37.597436  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:37.597845  980156 main.go:141] libmachine: (test-preload-013266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:10:44", ip: ""} in network mk-test-preload-013266: {Iface:virbr1 ExpiryTime:2025-01-20 13:16:28 +0000 UTC Type:0 Mac:52:54:00:9a:10:44 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:test-preload-013266 Clientid:01:52:54:00:9a:10:44}
	I0120 12:16:37.597872  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined IP address 192.168.39.82 and MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:37.598011  980156 main.go:141] libmachine: (test-preload-013266) Calling .DriverName
	I0120 12:16:37.598483  980156 main.go:141] libmachine: (test-preload-013266) Calling .DriverName
	I0120 12:16:37.598665  980156 main.go:141] libmachine: (test-preload-013266) Calling .DriverName
	I0120 12:16:37.598744  980156 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 12:16:37.598783  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHHostname
	I0120 12:16:37.598886  980156 ssh_runner.go:195] Run: cat /version.json
	I0120 12:16:37.598913  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHHostname
	I0120 12:16:37.601399  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:37.601462  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:37.601777  980156 main.go:141] libmachine: (test-preload-013266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:10:44", ip: ""} in network mk-test-preload-013266: {Iface:virbr1 ExpiryTime:2025-01-20 13:16:28 +0000 UTC Type:0 Mac:52:54:00:9a:10:44 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:test-preload-013266 Clientid:01:52:54:00:9a:10:44}
	I0120 12:16:37.601807  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined IP address 192.168.39.82 and MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:37.601838  980156 main.go:141] libmachine: (test-preload-013266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:10:44", ip: ""} in network mk-test-preload-013266: {Iface:virbr1 ExpiryTime:2025-01-20 13:16:28 +0000 UTC Type:0 Mac:52:54:00:9a:10:44 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:test-preload-013266 Clientid:01:52:54:00:9a:10:44}
	I0120 12:16:37.601850  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined IP address 192.168.39.82 and MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:37.601947  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHPort
	I0120 12:16:37.602054  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHPort
	I0120 12:16:37.602130  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHKeyPath
	I0120 12:16:37.602205  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHKeyPath
	I0120 12:16:37.602243  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHUsername
	I0120 12:16:37.602360  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHUsername
	I0120 12:16:37.602414  980156 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/test-preload-013266/id_rsa Username:docker}
	I0120 12:16:37.602502  980156 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/test-preload-013266/id_rsa Username:docker}
	I0120 12:16:37.706156  980156 ssh_runner.go:195] Run: systemctl --version
	I0120 12:16:37.711778  980156 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 12:16:37.849082  980156 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 12:16:37.855123  980156 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 12:16:37.855194  980156 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 12:16:37.870025  980156 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
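[editor's note] The find/mv invocation above sidelines any pre-existing bridge or podman CNI configs (everything except loopback) so they cannot conflict with the CNI minikube writes later. The same command as a standalone sketch, with quoting adjusted slightly for interactive use:
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;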
	I0120 12:16:37.870048  980156 start.go:495] detecting cgroup driver to use...
	I0120 12:16:37.870124  980156 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 12:16:37.885727  980156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 12:16:37.899920  980156 docker.go:217] disabling cri-docker service (if available) ...
	I0120 12:16:37.899990  980156 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 12:16:37.913137  980156 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 12:16:37.925639  980156 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 12:16:38.028236  980156 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 12:16:38.182417  980156 docker.go:233] disabling docker service ...
	I0120 12:16:38.182500  980156 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 12:16:38.195900  980156 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 12:16:38.208423  980156 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 12:16:38.321563  980156 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 12:16:38.427987  980156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 12:16:38.440855  980156 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 12:16:38.457089  980156 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0120 12:16:38.457147  980156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:16:38.466578  980156 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 12:16:38.466628  980156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:16:38.475974  980156 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:16:38.485296  980156 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:16:38.494541  980156 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 12:16:38.504275  980156 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:16:38.514262  980156 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:16:38.529590  980156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
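[editor's note] The sed edits above set the pause image, switch CRI-O to the cgroupfs cgroup manager, pin conmon to the pod cgroup, and open unprivileged low ports via default_sysctls. A quick way to confirm the result on the node; the expected values are shown as comments (a sketch, not output captured from this run):
	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.7"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",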
	I0120 12:16:38.539476  980156 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 12:16:38.548513  980156 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 12:16:38.548563  980156 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 12:16:38.567720  980156 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
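[editor's note] The sysctl probe above fails only because br_netfilter is not loaded yet, so the module is loaded and IPv4 forwarding enabled. By hand the sequence is roughly:
	sudo modprobe br_netfilter
	sudo sysctl net.bridge.bridge-nf-call-iptables      # resolvable once the module is loaded
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'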
	I0120 12:16:38.580376  980156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:16:38.694672  980156 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 12:16:38.778812  980156 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 12:16:38.778902  980156 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 12:16:38.783188  980156 start.go:563] Will wait 60s for crictl version
	I0120 12:16:38.783247  980156 ssh_runner.go:195] Run: which crictl
	I0120 12:16:38.786767  980156 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 12:16:38.825375  980156 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 12:16:38.825467  980156 ssh_runner.go:195] Run: crio --version
	I0120 12:16:38.851659  980156 ssh_runner.go:195] Run: crio --version
	I0120 12:16:38.885344  980156 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0120 12:16:38.886783  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetIP
	I0120 12:16:38.889313  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:38.889699  980156 main.go:141] libmachine: (test-preload-013266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:10:44", ip: ""} in network mk-test-preload-013266: {Iface:virbr1 ExpiryTime:2025-01-20 13:16:28 +0000 UTC Type:0 Mac:52:54:00:9a:10:44 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:test-preload-013266 Clientid:01:52:54:00:9a:10:44}
	I0120 12:16:38.889727  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined IP address 192.168.39.82 and MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:16:38.889965  980156 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0120 12:16:38.893977  980156 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 12:16:38.905745  980156 kubeadm.go:883] updating cluster {Name:test-preload-013266 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-013266 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 12:16:38.905854  980156 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0120 12:16:38.905895  980156 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:16:38.942111  980156 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0120 12:16:38.942189  980156 ssh_runner.go:195] Run: which lz4
	I0120 12:16:38.945716  980156 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 12:16:38.949530  980156 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 12:16:38.949562  980156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0120 12:16:40.241056  980156 crio.go:462] duration metric: took 1.295357737s to copy over tarball
	I0120 12:16:40.241150  980156 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 12:16:42.510280  980156 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.269101195s)
	I0120 12:16:42.510314  980156 crio.go:469] duration metric: took 2.269222607s to extract the tarball
	I0120 12:16:42.510322  980156 ssh_runner.go:146] rm: /preloaded.tar.lz4
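[editor's note] The preload path above copies the ~460 MB tarball to the guest and unpacks it into /var so the container-image store starts out populated. The on-node half of that, as a standalone command (same flags as in the log):
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo crictl images --output json   # check what landed; in this run the expected v1.24.4
	                                   # images were still missing, so minikube fell back to
	                                   # its per-image cache below
	sudo rm -f /preloaded.tar.lz4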
	I0120 12:16:42.549787  980156 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:16:42.589207  980156 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0120 12:16:42.589234  980156 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0120 12:16:42.589338  980156 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:16:42.589346  980156 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0120 12:16:42.589389  980156 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0120 12:16:42.589404  980156 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0120 12:16:42.589421  980156 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0120 12:16:42.589456  980156 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0120 12:16:42.589524  980156 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0120 12:16:42.589433  980156 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0120 12:16:42.591111  980156 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0120 12:16:42.591122  980156 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0120 12:16:42.591130  980156 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0120 12:16:42.591157  980156 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0120 12:16:42.591186  980156 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:16:42.591196  980156 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0120 12:16:42.591204  980156 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0120 12:16:42.591241  980156 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0120 12:16:42.798719  980156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0120 12:16:42.802042  980156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0120 12:16:42.803617  980156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0120 12:16:42.814694  980156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0120 12:16:42.815680  980156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0120 12:16:42.823483  980156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0120 12:16:42.836232  980156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0120 12:16:42.885647  980156 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0120 12:16:42.885695  980156 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0120 12:16:42.885736  980156 ssh_runner.go:195] Run: which crictl
	I0120 12:16:42.896820  980156 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0120 12:16:42.896854  980156 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0120 12:16:42.896891  980156 ssh_runner.go:195] Run: which crictl
	I0120 12:16:42.931361  980156 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0120 12:16:42.931408  980156 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0120 12:16:42.931450  980156 ssh_runner.go:195] Run: which crictl
	I0120 12:16:42.938336  980156 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0120 12:16:42.938375  980156 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0120 12:16:42.938413  980156 ssh_runner.go:195] Run: which crictl
	I0120 12:16:42.955474  980156 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0120 12:16:42.955517  980156 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0120 12:16:42.955560  980156 ssh_runner.go:195] Run: which crictl
	I0120 12:16:42.959627  980156 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0120 12:16:42.959660  980156 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0120 12:16:42.959695  980156 ssh_runner.go:195] Run: which crictl
	I0120 12:16:42.959724  980156 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0120 12:16:42.959765  980156 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0120 12:16:42.959821  980156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0120 12:16:42.959848  980156 ssh_runner.go:195] Run: which crictl
	I0120 12:16:42.959911  980156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0120 12:16:42.959933  980156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0120 12:16:42.959990  980156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0120 12:16:42.960003  980156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0120 12:16:42.963441  980156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0120 12:16:43.029994  980156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0120 12:16:43.030028  980156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0120 12:16:43.080903  980156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0120 12:16:43.081047  980156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0120 12:16:43.116887  980156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0120 12:16:43.116918  980156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0120 12:16:43.116964  980156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0120 12:16:43.140049  980156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0120 12:16:43.150719  980156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0120 12:16:43.196645  980156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0120 12:16:43.196671  980156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0120 12:16:43.267542  980156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0120 12:16:43.267648  980156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0120 12:16:43.268073  980156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0120 12:16:43.288559  980156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0120 12:16:43.288651  980156 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0120 12:16:43.288765  980156 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0120 12:16:43.317093  980156 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0120 12:16:43.317183  980156 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0120 12:16:43.317205  980156 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0120 12:16:43.317280  980156 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0120 12:16:43.378625  980156 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0120 12:16:43.378733  980156 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0120 12:16:43.388230  980156 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0120 12:16:43.388248  980156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0120 12:16:43.388259  980156 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0120 12:16:43.388275  980156 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0120 12:16:43.388299  980156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0120 12:16:43.388314  980156 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0120 12:16:43.388340  980156 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0120 12:16:43.388367  980156 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0120 12:16:43.388389  980156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0120 12:16:43.388412  980156 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0120 12:16:43.388418  980156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0120 12:16:43.388449  980156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0120 12:16:43.392386  980156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0120 12:16:43.397720  980156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0120 12:16:43.397816  980156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0120 12:16:43.794320  980156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:16:46.343171  980156 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6: (2.954844426s)
	I0120 12:16:46.343208  980156 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0120 12:16:46.343234  980156 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0120 12:16:46.343172  980156 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.548809659s)
	I0120 12:16:46.343286  980156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0120 12:16:46.984707  980156 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0120 12:16:46.984751  980156 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0120 12:16:46.984820  980156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0120 12:16:47.828330  980156 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0120 12:16:47.828382  980156 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0120 12:16:47.828439  980156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0120 12:16:48.568454  980156 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0120 12:16:48.568497  980156 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0120 12:16:48.568543  980156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0120 12:16:50.616393  980156 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.047824597s)
	I0120 12:16:50.616428  980156 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0120 12:16:50.616459  980156 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0120 12:16:50.616583  980156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0120 12:16:50.760209  980156 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0120 12:16:50.760284  980156 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0120 12:16:50.760347  980156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0120 12:16:51.201739  980156 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0120 12:16:51.201783  980156 cache_images.go:123] Successfully loaded all cached images
	I0120 12:16:51.201789  980156 cache_images.go:92] duration metric: took 8.612543069s to LoadCachedImages
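[editor's note] Because the preload did not satisfy the image check, each image is transferred from the host cache and loaded with podman. Condensed into a loop over the archives the log shows under /var/lib/minikube/images:
	for img in coredns_v1.8.6 kube-controller-manager_v1.24.4 kube-proxy_v1.24.4 \
	           kube-apiserver_v1.24.4 etcd_3.5.3-0 pause_3.7 kube-scheduler_v1.24.4; do
	  sudo podman load -i "/var/lib/minikube/images/$img"
	done
	sudo crictl images   # the seven images above should now resolve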
	I0120 12:16:51.201803  980156 kubeadm.go:934] updating node { 192.168.39.82 8443 v1.24.4 crio true true} ...
	I0120 12:16:51.201909  980156 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-013266 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.82
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-013266 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 12:16:51.201976  980156 ssh_runner.go:195] Run: crio config
	I0120 12:16:51.252120  980156 cni.go:84] Creating CNI manager for ""
	I0120 12:16:51.252141  980156 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:16:51.252152  980156 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 12:16:51.252172  980156 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.82 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-013266 NodeName:test-preload-013266 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.82"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.82 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 12:16:51.252298  980156 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.82
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-013266"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.82
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.82"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 12:16:51.252359  980156 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0120 12:16:51.261353  980156 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 12:16:51.261420  980156 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 12:16:51.270512  980156 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0120 12:16:51.285973  980156 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 12:16:51.300331  980156 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0120 12:16:51.316305  980156 ssh_runner.go:195] Run: grep 192.168.39.82	control-plane.minikube.internal$ /etc/hosts
	I0120 12:16:51.319848  980156 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.82	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 12:16:51.330580  980156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:16:51.444077  980156 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:16:51.464878  980156 certs.go:68] Setting up /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/test-preload-013266 for IP: 192.168.39.82
	I0120 12:16:51.464906  980156 certs.go:194] generating shared ca certs ...
	I0120 12:16:51.464930  980156 certs.go:226] acquiring lock for ca certs: {Name:mk7d6f61e0c358ebc451db1b73619131ab80bd63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:16:51.465133  980156 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20151-942401/.minikube/ca.key
	I0120 12:16:51.465184  980156 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.key
	I0120 12:16:51.465192  980156 certs.go:256] generating profile certs ...
	I0120 12:16:51.465308  980156 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/test-preload-013266/client.key
	I0120 12:16:51.465379  980156 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/test-preload-013266/apiserver.key.6a5added
	I0120 12:16:51.465437  980156 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/test-preload-013266/proxy-client.key
	I0120 12:16:51.465624  980156 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656.pem (1338 bytes)
	W0120 12:16:51.465672  980156 certs.go:480] ignoring /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656_empty.pem, impossibly tiny 0 bytes
	I0120 12:16:51.465686  980156 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem (1679 bytes)
	I0120 12:16:51.465720  980156 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem (1078 bytes)
	I0120 12:16:51.465751  980156 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem (1123 bytes)
	I0120 12:16:51.465783  980156 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem (1675 bytes)
	I0120 12:16:51.465848  980156 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem (1708 bytes)
	I0120 12:16:51.466750  980156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 12:16:51.489247  980156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0120 12:16:51.515730  980156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 12:16:51.560227  980156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 12:16:51.591065  980156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/test-preload-013266/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0120 12:16:51.625890  980156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/test-preload-013266/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0120 12:16:51.651299  980156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/test-preload-013266/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 12:16:51.684072  980156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/test-preload-013266/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 12:16:51.705201  980156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 12:16:51.727054  980156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656.pem --> /usr/share/ca-certificates/949656.pem (1338 bytes)
	I0120 12:16:51.748650  980156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem --> /usr/share/ca-certificates/9496562.pem (1708 bytes)
	I0120 12:16:51.769574  980156 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 12:16:51.784221  980156 ssh_runner.go:195] Run: openssl version
	I0120 12:16:51.789412  980156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9496562.pem && ln -fs /usr/share/ca-certificates/9496562.pem /etc/ssl/certs/9496562.pem"
	I0120 12:16:51.798645  980156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9496562.pem
	I0120 12:16:51.802621  980156 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 11:30 /usr/share/ca-certificates/9496562.pem
	I0120 12:16:51.802661  980156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9496562.pem
	I0120 12:16:51.807885  980156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9496562.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 12:16:51.817157  980156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 12:16:51.826506  980156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:16:51.830596  980156 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 11:22 /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:16:51.830644  980156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:16:51.835731  980156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 12:16:51.844924  980156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/949656.pem && ln -fs /usr/share/ca-certificates/949656.pem /etc/ssl/certs/949656.pem"
	I0120 12:16:51.854195  980156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/949656.pem
	I0120 12:16:51.858228  980156 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 11:30 /usr/share/ca-certificates/949656.pem
	I0120 12:16:51.858283  980156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/949656.pem
	I0120 12:16:51.863331  980156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/949656.pem /etc/ssl/certs/51391683.0"
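[editor's note] The openssl/ln steps above follow OpenSSL's hashed-directory layout: each CA PEM under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named <subject-hash>.0, which is how TLS clients on the node locate it. Reproducing one of the three entries by hand:
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	ls -l "/etc/ssl/certs/${hash}.0"   # b5213941.0 in this run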
	I0120 12:16:51.872496  980156 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 12:16:51.876633  980156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 12:16:51.881728  980156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 12:16:51.886829  980156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 12:16:51.892214  980156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 12:16:51.897425  980156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 12:16:51.902624  980156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
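[editor's note] The -checkend 86400 probes above ask openssl whether each certificate will still be valid 24 hours from now; exit status 0 means yes, non-zero means it expires within the window. For example:
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" \
	  || echo "expires (or is already expired) within 24h"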
	I0120 12:16:51.907809  980156 kubeadm.go:392] StartCluster: {Name:test-preload-013266 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-013266 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:16:51.907884  980156 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 12:16:51.907924  980156 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 12:16:51.945304  980156 cri.go:89] found id: ""
	I0120 12:16:51.945360  980156 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 12:16:51.954114  980156 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0120 12:16:51.954136  980156 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0120 12:16:51.954176  980156 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0120 12:16:51.963506  980156 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0120 12:16:51.963989  980156 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-013266" does not appear in /home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:16:51.964096  980156 kubeconfig.go:62] /home/jenkins/minikube-integration/20151-942401/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-013266" cluster setting kubeconfig missing "test-preload-013266" context setting]
	I0120 12:16:51.964409  980156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/kubeconfig: {Name:mk7b2d17f701fc845d991764ab9cc32b8b0646e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:16:51.965050  980156 kapi.go:59] client config for test-preload-013266: &rest.Config{Host:"https://192.168.39.82:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20151-942401/.minikube/profiles/test-preload-013266/client.crt", KeyFile:"/home/jenkins/minikube-integration/20151-942401/.minikube/profiles/test-preload-013266/client.key", CAFile:"/home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243bda0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0120 12:16:51.965790  980156 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0120 12:16:51.974716  980156 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.82
	I0120 12:16:51.974750  980156 kubeadm.go:1160] stopping kube-system containers ...
	I0120 12:16:51.974767  980156 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0120 12:16:51.974812  980156 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 12:16:52.007696  980156 cri.go:89] found id: ""
	I0120 12:16:52.007753  980156 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0120 12:16:52.022124  980156 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:16:52.030156  980156 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:16:52.030172  980156 kubeadm.go:157] found existing configuration files:
	
	I0120 12:16:52.030208  980156 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 12:16:52.037827  980156 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:16:52.037866  980156 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:16:52.045775  980156 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 12:16:52.053406  980156 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:16:52.053450  980156 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:16:52.061525  980156 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 12:16:52.069170  980156 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:16:52.069204  980156 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:16:52.077199  980156 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 12:16:52.085089  980156 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:16:52.085144  980156 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 12:16:52.094043  980156 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 12:16:52.102487  980156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:16:52.192677  980156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:16:52.998395  980156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:16:53.236741  980156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:16:53.299118  980156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:16:53.380657  980156 api_server.go:52] waiting for apiserver process to appear ...
	I0120 12:16:53.380759  980156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:16:53.881609  980156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:16:54.380813  980156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:16:54.397820  980156 api_server.go:72] duration metric: took 1.017159336s to wait for apiserver process to appear ...
	I0120 12:16:54.397846  980156 api_server.go:88] waiting for apiserver healthz status ...
	I0120 12:16:54.397865  980156 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I0120 12:16:54.398363  980156 api_server.go:269] stopped: https://192.168.39.82:8443/healthz: Get "https://192.168.39.82:8443/healthz": dial tcp 192.168.39.82:8443: connect: connection refused
	I0120 12:16:54.898191  980156 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I0120 12:16:57.999180  980156 api_server.go:279] https://192.168.39.82:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0120 12:16:57.999210  980156 api_server.go:103] status: https://192.168.39.82:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0120 12:16:57.999226  980156 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I0120 12:16:58.012840  980156 api_server.go:279] https://192.168.39.82:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0120 12:16:58.012864  980156 api_server.go:103] status: https://192.168.39.82:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0120 12:16:58.398500  980156 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I0120 12:16:58.402983  980156 api_server.go:279] https://192.168.39.82:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 12:16:58.403016  980156 api_server.go:103] status: https://192.168.39.82:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 12:16:58.898690  980156 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I0120 12:16:58.903616  980156 api_server.go:279] https://192.168.39.82:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 12:16:58.903640  980156 api_server.go:103] status: https://192.168.39.82:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 12:16:59.398171  980156 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I0120 12:16:59.406157  980156 api_server.go:279] https://192.168.39.82:8443/healthz returned 200:
	ok
	I0120 12:16:59.417196  980156 api_server.go:141] control plane version: v1.24.4
	I0120 12:16:59.417221  980156 api_server.go:131] duration metric: took 5.019368592s to wait for apiserver health ...
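	For context, the healthz wait above is a plain retry loop: minikube polls https://192.168.39.82:8443/healthz roughly every 500ms (visible in the timestamps), tolerating the 403 responses from the anonymous user and the 500s while the rbac/bootstrap-roles and priority-class hooks settle, until the endpoint returns 200. The sketch below reproduces that pattern with net/http and the client certificate paths from the rest.Config logged earlier; it is an illustrative standalone program, not minikube's api_server.go, and the 30s budget is an assumption.
	
	package main
	
	import (
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"net/http"
		"os"
		"time"
	)
	
	// pollHealthz retries GET url+"/healthz" with a client certificate until it
	// returns 200 or the deadline passes.
	func pollHealthz(url, certFile, keyFile, caFile string, timeout time.Duration) error {
		cert, err := tls.LoadX509KeyPair(certFile, keyFile)
		if err != nil {
			return err
		}
		caPEM, err := os.ReadFile(caFile)
		if err != nil {
			return err
		}
		pool := x509.NewCertPool()
		pool.AppendCertsFromPEM(caPEM)
	
		client := &http.Client{
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
			},
			Timeout: 5 * time.Second,
		}
	
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url + "/healthz")
			if err == nil {
				code := resp.StatusCode
				resp.Body.Close()
				if code == http.StatusOK {
					return nil // apiserver reports healthy
				}
				// 403 (anonymous user) or 500 (bootstrap hooks pending): keep retrying.
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}
	
	func main() {
		profile := "/home/jenkins/minikube-integration/20151-942401/.minikube/profiles/test-preload-013266"
		err := pollHealthz("https://192.168.39.82:8443",
			profile+"/client.crt", profile+"/client.key",
			"/home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt",
			30*time.Second)
		fmt.Println("healthz wait result:", err)
	}
	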
	I0120 12:16:59.417230  980156 cni.go:84] Creating CNI manager for ""
	I0120 12:16:59.417236  980156 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:16:59.419194  980156 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 12:16:59.420585  980156 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 12:16:59.465639  980156 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
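	The two lines above create /etc/cni/net.d and copy a 496-byte bridge conflist onto the node, since the kvm2 driver with the crio runtime is given the built-in bridge CNI. The exact file minikube renders is not reproduced in this log; the snippet below writes an illustrative minimal bridge conflist of the same kind (plugin options and the pod subnet are assumptions, not the logged file's contents).
	
	package main
	
	import "os"
	
	// An illustrative bridge CNI conflist of the kind written to
	// /etc/cni/net.d/1-k8s.conflist above; the field values are assumptions,
	// not the exact 496-byte file minikube generated.
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "addIf": "true",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": {"portMappings": true}
	    }
	  ]
	}`
	
	func main() {
		// Written locally here; minikube pushes the rendered file over SSH instead.
		if err := os.WriteFile("1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			panic(err)
		}
	}
	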
	I0120 12:16:59.494031  980156 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 12:16:59.494160  980156 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0120 12:16:59.494186  980156 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0120 12:16:59.510059  980156 system_pods.go:59] 7 kube-system pods found
	I0120 12:16:59.510095  980156 system_pods.go:61] "coredns-6d4b75cb6d-4hlhp" [fd40aff5-bee9-43ec-ad77-93bbb0c9b394] Running
	I0120 12:16:59.510113  980156 system_pods.go:61] "etcd-test-preload-013266" [dd9e92b1-7a25-4f10-9c53-4d0c78c6e536] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0120 12:16:59.510122  980156 system_pods.go:61] "kube-apiserver-test-preload-013266" [c7d50c8f-daeb-4d78-aa87-16ff18adfc07] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0120 12:16:59.510128  980156 system_pods.go:61] "kube-controller-manager-test-preload-013266" [c66fb48a-4082-49a3-9e25-e12cda47a26e] Running
	I0120 12:16:59.510136  980156 system_pods.go:61] "kube-proxy-dxzqp" [cca033fc-d616-4857-b0ae-6612d550a26f] Running
	I0120 12:16:59.510141  980156 system_pods.go:61] "kube-scheduler-test-preload-013266" [84cd9be1-c72c-400d-9885-080169f14852] Running
	I0120 12:16:59.510149  980156 system_pods.go:61] "storage-provisioner" [3d84cff8-c201-41a1-9bed-b36e2e017aa8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0120 12:16:59.510165  980156 system_pods.go:74] duration metric: took 16.100066ms to wait for pod list to return data ...
	I0120 12:16:59.510179  980156 node_conditions.go:102] verifying NodePressure condition ...
	I0120 12:16:59.514048  980156 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 12:16:59.514077  980156 node_conditions.go:123] node cpu capacity is 2
	I0120 12:16:59.514090  980156 node_conditions.go:105] duration metric: took 3.905421ms to run NodePressure ...
	I0120 12:16:59.514120  980156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:16:59.737126  980156 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0120 12:16:59.743223  980156 kubeadm.go:739] kubelet initialised
	I0120 12:16:59.743248  980156 kubeadm.go:740] duration metric: took 6.091288ms waiting for restarted kubelet to initialise ...
	I0120 12:16:59.743260  980156 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:16:59.747503  980156 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-4hlhp" in "kube-system" namespace to be "Ready" ...
	I0120 12:16:59.754279  980156 pod_ready.go:98] node "test-preload-013266" hosting pod "coredns-6d4b75cb6d-4hlhp" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-013266" has status "Ready":"False"
	I0120 12:16:59.754302  980156 pod_ready.go:82] duration metric: took 6.77384ms for pod "coredns-6d4b75cb6d-4hlhp" in "kube-system" namespace to be "Ready" ...
	E0120 12:16:59.754312  980156 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-013266" hosting pod "coredns-6d4b75cb6d-4hlhp" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-013266" has status "Ready":"False"
	I0120 12:16:59.754320  980156 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-013266" in "kube-system" namespace to be "Ready" ...
	I0120 12:16:59.758143  980156 pod_ready.go:98] node "test-preload-013266" hosting pod "etcd-test-preload-013266" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-013266" has status "Ready":"False"
	I0120 12:16:59.758162  980156 pod_ready.go:82] duration metric: took 3.833347ms for pod "etcd-test-preload-013266" in "kube-system" namespace to be "Ready" ...
	E0120 12:16:59.758170  980156 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-013266" hosting pod "etcd-test-preload-013266" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-013266" has status "Ready":"False"
	I0120 12:16:59.758175  980156 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-013266" in "kube-system" namespace to be "Ready" ...
	I0120 12:16:59.762870  980156 pod_ready.go:98] node "test-preload-013266" hosting pod "kube-apiserver-test-preload-013266" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-013266" has status "Ready":"False"
	I0120 12:16:59.762892  980156 pod_ready.go:82] duration metric: took 4.710721ms for pod "kube-apiserver-test-preload-013266" in "kube-system" namespace to be "Ready" ...
	E0120 12:16:59.762900  980156 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-013266" hosting pod "kube-apiserver-test-preload-013266" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-013266" has status "Ready":"False"
	I0120 12:16:59.762907  980156 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-013266" in "kube-system" namespace to be "Ready" ...
	I0120 12:16:59.898131  980156 pod_ready.go:98] node "test-preload-013266" hosting pod "kube-controller-manager-test-preload-013266" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-013266" has status "Ready":"False"
	I0120 12:16:59.898170  980156 pod_ready.go:82] duration metric: took 135.251905ms for pod "kube-controller-manager-test-preload-013266" in "kube-system" namespace to be "Ready" ...
	E0120 12:16:59.898185  980156 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-013266" hosting pod "kube-controller-manager-test-preload-013266" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-013266" has status "Ready":"False"
	I0120 12:16:59.898197  980156 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-dxzqp" in "kube-system" namespace to be "Ready" ...
	I0120 12:17:00.297732  980156 pod_ready.go:98] node "test-preload-013266" hosting pod "kube-proxy-dxzqp" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-013266" has status "Ready":"False"
	I0120 12:17:00.297764  980156 pod_ready.go:82] duration metric: took 399.556686ms for pod "kube-proxy-dxzqp" in "kube-system" namespace to be "Ready" ...
	E0120 12:17:00.297773  980156 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-013266" hosting pod "kube-proxy-dxzqp" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-013266" has status "Ready":"False"
	I0120 12:17:00.297779  980156 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-013266" in "kube-system" namespace to be "Ready" ...
	I0120 12:17:00.697759  980156 pod_ready.go:98] node "test-preload-013266" hosting pod "kube-scheduler-test-preload-013266" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-013266" has status "Ready":"False"
	I0120 12:17:00.697788  980156 pod_ready.go:82] duration metric: took 400.002712ms for pod "kube-scheduler-test-preload-013266" in "kube-system" namespace to be "Ready" ...
	E0120 12:17:00.697798  980156 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-013266" hosting pod "kube-scheduler-test-preload-013266" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-013266" has status "Ready":"False"
	I0120 12:17:00.697806  980156 pod_ready.go:39] duration metric: took 954.535198ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
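	The pod_ready waits above list each system-critical pod and check its Ready condition, skipping pods whose host node is itself not yet Ready. A small client-go sketch of that per-pod check follows; the label selectors and kubeconfig path are the ones logged in this run, but the helper itself is illustrative and not minikube's pod_ready.go.
	
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// podIsReady reports whether the pod's Ready condition is True.
	func podIsReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20151-942401/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same selectors minikube waits on: kube-dns, etcd, apiserver, controller-manager, kube-proxy, scheduler.
		selectors := []string{
			"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
			"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
		}
		for _, sel := range selectors {
			pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{LabelSelector: sel})
			if err != nil {
				panic(err)
			}
			for _, p := range pods.Items {
				fmt.Printf("%-50s ready=%v\n", p.Name, podIsReady(&p))
			}
		}
	}
	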
	I0120 12:17:00.697825  980156 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 12:17:00.708784  980156 ops.go:34] apiserver oom_adj: -16
	I0120 12:17:00.708802  980156 kubeadm.go:597] duration metric: took 8.754659592s to restartPrimaryControlPlane
	I0120 12:17:00.708810  980156 kubeadm.go:394] duration metric: took 8.801006581s to StartCluster
	I0120 12:17:00.708826  980156 settings.go:142] acquiring lock: {Name:mk1751cb86b61fcfcb0ff093c93fb653ca2002ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:17:00.708905  980156 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:17:00.709552  980156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/kubeconfig: {Name:mk7b2d17f701fc845d991764ab9cc32b8b0646e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:17:00.709784  980156 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 12:17:00.709943  980156 config.go:182] Loaded profile config "test-preload-013266": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0120 12:17:00.709921  980156 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 12:17:00.710032  980156 addons.go:69] Setting storage-provisioner=true in profile "test-preload-013266"
	I0120 12:17:00.710050  980156 addons.go:69] Setting default-storageclass=true in profile "test-preload-013266"
	I0120 12:17:00.710059  980156 addons.go:238] Setting addon storage-provisioner=true in "test-preload-013266"
	I0120 12:17:00.710068  980156 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-013266"
	W0120 12:17:00.710071  980156 addons.go:247] addon storage-provisioner should already be in state true
	I0120 12:17:00.710140  980156 host.go:66] Checking if "test-preload-013266" exists ...
	I0120 12:17:00.710370  980156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:17:00.710420  980156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:17:00.710589  980156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:17:00.710630  980156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:17:00.711438  980156 out.go:177] * Verifying Kubernetes components...
	I0120 12:17:00.712963  980156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:17:00.725995  980156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40629
	I0120 12:17:00.726560  980156 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:17:00.727170  980156 main.go:141] libmachine: Using API Version  1
	I0120 12:17:00.727195  980156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:17:00.727576  980156 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:17:00.728193  980156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:17:00.728249  980156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:17:00.729926  980156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36813
	I0120 12:17:00.730481  980156 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:17:00.731065  980156 main.go:141] libmachine: Using API Version  1
	I0120 12:17:00.731108  980156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:17:00.731507  980156 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:17:00.731739  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetState
	I0120 12:17:00.734192  980156 kapi.go:59] client config for test-preload-013266: &rest.Config{Host:"https://192.168.39.82:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20151-942401/.minikube/profiles/test-preload-013266/client.crt", KeyFile:"/home/jenkins/minikube-integration/20151-942401/.minikube/profiles/test-preload-013266/client.key", CAFile:"/home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243bda0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0120 12:17:00.734606  980156 addons.go:238] Setting addon default-storageclass=true in "test-preload-013266"
	W0120 12:17:00.734628  980156 addons.go:247] addon default-storageclass should already be in state true
	I0120 12:17:00.734661  980156 host.go:66] Checking if "test-preload-013266" exists ...
	I0120 12:17:00.735027  980156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:17:00.735069  980156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:17:00.744079  980156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40049
	I0120 12:17:00.744454  980156 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:17:00.744914  980156 main.go:141] libmachine: Using API Version  1
	I0120 12:17:00.744935  980156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:17:00.745266  980156 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:17:00.745483  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetState
	I0120 12:17:00.747016  980156 main.go:141] libmachine: (test-preload-013266) Calling .DriverName
	I0120 12:17:00.748583  980156 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:17:00.749821  980156 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:17:00.749838  980156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 12:17:00.749853  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHHostname
	I0120 12:17:00.750258  980156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35417
	I0120 12:17:00.750705  980156 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:17:00.751252  980156 main.go:141] libmachine: Using API Version  1
	I0120 12:17:00.751276  980156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:17:00.751617  980156 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:17:00.752198  980156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:17:00.752250  980156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:17:00.752932  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:17:00.753307  980156 main.go:141] libmachine: (test-preload-013266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:10:44", ip: ""} in network mk-test-preload-013266: {Iface:virbr1 ExpiryTime:2025-01-20 13:16:28 +0000 UTC Type:0 Mac:52:54:00:9a:10:44 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:test-preload-013266 Clientid:01:52:54:00:9a:10:44}
	I0120 12:17:00.753341  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined IP address 192.168.39.82 and MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:17:00.753484  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHPort
	I0120 12:17:00.753642  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHKeyPath
	I0120 12:17:00.753765  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHUsername
	I0120 12:17:00.753867  980156 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/test-preload-013266/id_rsa Username:docker}
	I0120 12:17:00.797555  980156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37605
	I0120 12:17:00.797953  980156 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:17:00.798459  980156 main.go:141] libmachine: Using API Version  1
	I0120 12:17:00.798487  980156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:17:00.799032  980156 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:17:00.799266  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetState
	I0120 12:17:00.801081  980156 main.go:141] libmachine: (test-preload-013266) Calling .DriverName
	I0120 12:17:00.801299  980156 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 12:17:00.801317  980156 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 12:17:00.801344  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHHostname
	I0120 12:17:00.804218  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:17:00.804641  980156 main.go:141] libmachine: (test-preload-013266) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:10:44", ip: ""} in network mk-test-preload-013266: {Iface:virbr1 ExpiryTime:2025-01-20 13:16:28 +0000 UTC Type:0 Mac:52:54:00:9a:10:44 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:test-preload-013266 Clientid:01:52:54:00:9a:10:44}
	I0120 12:17:00.804675  980156 main.go:141] libmachine: (test-preload-013266) DBG | domain test-preload-013266 has defined IP address 192.168.39.82 and MAC address 52:54:00:9a:10:44 in network mk-test-preload-013266
	I0120 12:17:00.804861  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHPort
	I0120 12:17:00.805060  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHKeyPath
	I0120 12:17:00.805208  980156 main.go:141] libmachine: (test-preload-013266) Calling .GetSSHUsername
	I0120 12:17:00.805335  980156 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/test-preload-013266/id_rsa Username:docker}
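	The SSH clients set up above (IP 192.168.39.82, port 22, user docker, the machine's id_rsa key) are what minikube uses to copy the addon manifests and run the remote commands that follow. As an aside, the sketch below shows how such a client can run one of those commands with golang.org/x/crypto/ssh; it is an illustrative standalone program, not minikube's sshutil package, and the InsecureIgnoreHostKey callback is an assumption that is only acceptable for a throwaway test VM.
	
	package main
	
	import (
		"fmt"
		"os"
	
		"golang.org/x/crypto/ssh"
	)
	
	func main() {
		// Key path, user, and endpoint as logged by sshutil.go above; everything else is illustrative.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/20151-942401/.minikube/machines/test-preload-013266/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a disposable test VM only
		}
		client, err := ssh.Dial("tcp", "192.168.39.82:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
	
		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()
	
		out, err := session.CombinedOutput("sudo systemctl start kubelet")
		fmt.Printf("output: %s, err: %v\n", out, err)
	}
	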
	I0120 12:17:00.884793  980156 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:17:00.900910  980156 node_ready.go:35] waiting up to 6m0s for node "test-preload-013266" to be "Ready" ...
	I0120 12:17:00.967047  980156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 12:17:01.034904  980156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:17:01.913649  980156 main.go:141] libmachine: Making call to close driver server
	I0120 12:17:01.913677  980156 main.go:141] libmachine: (test-preload-013266) Calling .Close
	I0120 12:17:01.913972  980156 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:17:01.913986  980156 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:17:01.913996  980156 main.go:141] libmachine: Making call to close driver server
	I0120 12:17:01.914018  980156 main.go:141] libmachine: (test-preload-013266) DBG | Closing plugin on server side
	I0120 12:17:01.914055  980156 main.go:141] libmachine: (test-preload-013266) Calling .Close
	I0120 12:17:01.914405  980156 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:17:01.914425  980156 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:17:01.914445  980156 main.go:141] libmachine: (test-preload-013266) DBG | Closing plugin on server side
	I0120 12:17:01.919229  980156 main.go:141] libmachine: Making call to close driver server
	I0120 12:17:01.919245  980156 main.go:141] libmachine: (test-preload-013266) Calling .Close
	I0120 12:17:01.919484  980156 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:17:01.919537  980156 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:17:01.944396  980156 main.go:141] libmachine: Making call to close driver server
	I0120 12:17:01.944414  980156 main.go:141] libmachine: (test-preload-013266) Calling .Close
	I0120 12:17:01.944694  980156 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:17:01.944714  980156 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:17:01.944724  980156 main.go:141] libmachine: Making call to close driver server
	I0120 12:17:01.944732  980156 main.go:141] libmachine: (test-preload-013266) Calling .Close
	I0120 12:17:01.944737  980156 main.go:141] libmachine: (test-preload-013266) DBG | Closing plugin on server side
	I0120 12:17:01.944983  980156 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:17:01.945000  980156 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:17:01.946759  980156 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0120 12:17:01.947980  980156 addons.go:514] duration metric: took 1.238070116s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0120 12:17:02.904743  980156 node_ready.go:53] node "test-preload-013266" has status "Ready":"False"
	I0120 12:17:04.906588  980156 node_ready.go:53] node "test-preload-013266" has status "Ready":"False"
	I0120 12:17:07.404657  980156 node_ready.go:53] node "test-preload-013266" has status "Ready":"False"
	I0120 12:17:08.404395  980156 node_ready.go:49] node "test-preload-013266" has status "Ready":"True"
	I0120 12:17:08.404420  980156 node_ready.go:38] duration metric: took 7.503477683s for node "test-preload-013266" to be "Ready" ...
	I0120 12:17:08.404431  980156 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:17:08.408945  980156 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-4hlhp" in "kube-system" namespace to be "Ready" ...
	I0120 12:17:08.413961  980156 pod_ready.go:93] pod "coredns-6d4b75cb6d-4hlhp" in "kube-system" namespace has status "Ready":"True"
	I0120 12:17:08.413981  980156 pod_ready.go:82] duration metric: took 5.015645ms for pod "coredns-6d4b75cb6d-4hlhp" in "kube-system" namespace to be "Ready" ...
	I0120 12:17:08.413989  980156 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-013266" in "kube-system" namespace to be "Ready" ...
	I0120 12:17:08.417899  980156 pod_ready.go:93] pod "etcd-test-preload-013266" in "kube-system" namespace has status "Ready":"True"
	I0120 12:17:08.417922  980156 pod_ready.go:82] duration metric: took 3.926081ms for pod "etcd-test-preload-013266" in "kube-system" namespace to be "Ready" ...
	I0120 12:17:08.417933  980156 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-013266" in "kube-system" namespace to be "Ready" ...
	I0120 12:17:08.422069  980156 pod_ready.go:93] pod "kube-apiserver-test-preload-013266" in "kube-system" namespace has status "Ready":"True"
	I0120 12:17:08.422090  980156 pod_ready.go:82] duration metric: took 4.145761ms for pod "kube-apiserver-test-preload-013266" in "kube-system" namespace to be "Ready" ...
	I0120 12:17:08.422100  980156 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-013266" in "kube-system" namespace to be "Ready" ...
	I0120 12:17:08.928580  980156 pod_ready.go:93] pod "kube-controller-manager-test-preload-013266" in "kube-system" namespace has status "Ready":"True"
	I0120 12:17:08.928605  980156 pod_ready.go:82] duration metric: took 506.494607ms for pod "kube-controller-manager-test-preload-013266" in "kube-system" namespace to be "Ready" ...
	I0120 12:17:08.928616  980156 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dxzqp" in "kube-system" namespace to be "Ready" ...
	I0120 12:17:09.205084  980156 pod_ready.go:93] pod "kube-proxy-dxzqp" in "kube-system" namespace has status "Ready":"True"
	I0120 12:17:09.205111  980156 pod_ready.go:82] duration metric: took 276.488822ms for pod "kube-proxy-dxzqp" in "kube-system" namespace to be "Ready" ...
	I0120 12:17:09.205122  980156 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-013266" in "kube-system" namespace to be "Ready" ...
	I0120 12:17:11.211028  980156 pod_ready.go:103] pod "kube-scheduler-test-preload-013266" in "kube-system" namespace has status "Ready":"False"
	I0120 12:17:13.713992  980156 pod_ready.go:93] pod "kube-scheduler-test-preload-013266" in "kube-system" namespace has status "Ready":"True"
	I0120 12:17:13.714017  980156 pod_ready.go:82] duration metric: took 4.508887339s for pod "kube-scheduler-test-preload-013266" in "kube-system" namespace to be "Ready" ...
	I0120 12:17:13.714027  980156 pod_ready.go:39] duration metric: took 5.309586522s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:17:13.714043  980156 api_server.go:52] waiting for apiserver process to appear ...
	I0120 12:17:13.714124  980156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:17:13.730196  980156 api_server.go:72] duration metric: took 13.020364136s to wait for apiserver process to appear ...
	I0120 12:17:13.730215  980156 api_server.go:88] waiting for apiserver healthz status ...
	I0120 12:17:13.730231  980156 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I0120 12:17:13.736019  980156 api_server.go:279] https://192.168.39.82:8443/healthz returned 200:
	ok
	I0120 12:17:13.736931  980156 api_server.go:141] control plane version: v1.24.4
	I0120 12:17:13.736956  980156 api_server.go:131] duration metric: took 6.734266ms to wait for apiserver health ...
	I0120 12:17:13.736967  980156 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 12:17:13.742539  980156 system_pods.go:59] 7 kube-system pods found
	I0120 12:17:13.742564  980156 system_pods.go:61] "coredns-6d4b75cb6d-4hlhp" [fd40aff5-bee9-43ec-ad77-93bbb0c9b394] Running
	I0120 12:17:13.742570  980156 system_pods.go:61] "etcd-test-preload-013266" [dd9e92b1-7a25-4f10-9c53-4d0c78c6e536] Running
	I0120 12:17:13.742574  980156 system_pods.go:61] "kube-apiserver-test-preload-013266" [c7d50c8f-daeb-4d78-aa87-16ff18adfc07] Running
	I0120 12:17:13.742577  980156 system_pods.go:61] "kube-controller-manager-test-preload-013266" [c66fb48a-4082-49a3-9e25-e12cda47a26e] Running
	I0120 12:17:13.742585  980156 system_pods.go:61] "kube-proxy-dxzqp" [cca033fc-d616-4857-b0ae-6612d550a26f] Running
	I0120 12:17:13.742588  980156 system_pods.go:61] "kube-scheduler-test-preload-013266" [84cd9be1-c72c-400d-9885-080169f14852] Running
	I0120 12:17:13.742592  980156 system_pods.go:61] "storage-provisioner" [3d84cff8-c201-41a1-9bed-b36e2e017aa8] Running
	I0120 12:17:13.742597  980156 system_pods.go:74] duration metric: took 5.624728ms to wait for pod list to return data ...
	I0120 12:17:13.742606  980156 default_sa.go:34] waiting for default service account to be created ...
	I0120 12:17:13.744630  980156 default_sa.go:45] found service account: "default"
	I0120 12:17:13.744648  980156 default_sa.go:55] duration metric: took 2.036992ms for default service account to be created ...
	I0120 12:17:13.744655  980156 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 12:17:13.808376  980156 system_pods.go:87] 7 kube-system pods found
	I0120 12:17:14.006027  980156 system_pods.go:105] "coredns-6d4b75cb6d-4hlhp" [fd40aff5-bee9-43ec-ad77-93bbb0c9b394] Running
	I0120 12:17:14.006051  980156 system_pods.go:105] "etcd-test-preload-013266" [dd9e92b1-7a25-4f10-9c53-4d0c78c6e536] Running
	I0120 12:17:14.006056  980156 system_pods.go:105] "kube-apiserver-test-preload-013266" [c7d50c8f-daeb-4d78-aa87-16ff18adfc07] Running
	I0120 12:17:14.006063  980156 system_pods.go:105] "kube-controller-manager-test-preload-013266" [c66fb48a-4082-49a3-9e25-e12cda47a26e] Running
	I0120 12:17:14.006067  980156 system_pods.go:105] "kube-proxy-dxzqp" [cca033fc-d616-4857-b0ae-6612d550a26f] Running
	I0120 12:17:14.006072  980156 system_pods.go:105] "kube-scheduler-test-preload-013266" [84cd9be1-c72c-400d-9885-080169f14852] Running
	I0120 12:17:14.006076  980156 system_pods.go:105] "storage-provisioner" [3d84cff8-c201-41a1-9bed-b36e2e017aa8] Running
	I0120 12:17:14.006085  980156 system_pods.go:147] duration metric: took 261.422251ms to wait for k8s-apps to be running ...
	I0120 12:17:14.006104  980156 system_svc.go:44] waiting for kubelet service to be running ....
	I0120 12:17:14.006165  980156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 12:17:14.023088  980156 system_svc.go:56] duration metric: took 16.986129ms WaitForService to wait for kubelet
	I0120 12:17:14.023128  980156 kubeadm.go:582] duration metric: took 13.313299921s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 12:17:14.023147  980156 node_conditions.go:102] verifying NodePressure condition ...
	I0120 12:17:14.205823  980156 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 12:17:14.205853  980156 node_conditions.go:123] node cpu capacity is 2
	I0120 12:17:14.205865  980156 node_conditions.go:105] duration metric: took 182.713721ms to run NodePressure ...
	I0120 12:17:14.205877  980156 start.go:241] waiting for startup goroutines ...
	I0120 12:17:14.205884  980156 start.go:246] waiting for cluster config update ...
	I0120 12:17:14.205895  980156 start.go:255] writing updated cluster config ...
	I0120 12:17:14.206229  980156 ssh_runner.go:195] Run: rm -f paused
	I0120 12:17:14.256866  980156 start.go:600] kubectl: 1.32.1, cluster: 1.24.4 (minor skew: 8)
	I0120 12:17:14.258697  980156 out.go:201] 
	W0120 12:17:14.259983  980156 out.go:270] ! /usr/local/bin/kubectl is version 1.32.1, which may have incompatibilities with Kubernetes 1.24.4.
	I0120 12:17:14.261068  980156 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0120 12:17:14.262086  980156 out.go:177] * Done! kubectl is now configured to use "test-preload-013266" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 20 12:17:15 test-preload-013266 crio[671]: time="2025-01-20 12:17:15.155371471Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737375435155354907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=824ab6c0-0c69-4eb1-8f52-5b47a12dda42 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:17:15 test-preload-013266 crio[671]: time="2025-01-20 12:17:15.156073090Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=70bd72ba-173f-4c82-819e-f557fa720a05 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:17:15 test-preload-013266 crio[671]: time="2025-01-20 12:17:15.156151655Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=70bd72ba-173f-4c82-819e-f557fa720a05 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:17:15 test-preload-013266 crio[671]: time="2025-01-20 12:17:15.156307018Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a8caa770c222f9551e91a882062a44243d669c8f618c0ea2f37406ac80b677c5,PodSandboxId:3f122a502d5f3f0351d145f8aa3e3ee7a5864a19b0342f8861e7c5eb4f27b588,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737375426448129366,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-4hlhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd40aff5-bee9-43ec-ad77-93bbb0c9b394,},Annotations:map[string]string{io.kubernetes.container.hash: b29324f4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:698b1b3227754d8c1202f74b15891870af727cbe5934529565cfe4b8f9870094,PodSandboxId:ea4fd3c48cee7e008dd18618fa9cc2d6770818c21d86d475d00564f29b1c70bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737375420528340570,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 3d84cff8-c201-41a1-9bed-b36e2e017aa8,},Annotations:map[string]string{io.kubernetes.container.hash: f6f495a9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d35c538f4ce9ac8cb8ae0267d719995acaeddd1a5f648af7314ce49ea6af6fa,PodSandboxId:f74d18719e259122af8fb35d020f541ee5cd75881c1e6efa50264e5c94c36f45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737375419397487669,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dxzqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc
a033fc-d616-4857-b0ae-6612d550a26f,},Annotations:map[string]string{io.kubernetes.container.hash: 20a79c96,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb411d8859f6a3cedbf99324dff22e7c870ac77366c3fae9b96aeae28210745c,PodSandboxId:ea4fd3c48cee7e008dd18618fa9cc2d6770818c21d86d475d00564f29b1c70bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1737375419349635313,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d84cff8-c201-4
1a1-9bed-b36e2e017aa8,},Annotations:map[string]string{io.kubernetes.container.hash: f6f495a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1dd627b2c3c71cf1ede0e7db4ae8d6e0a44ae5d394f2a071c53ef99cc7858f6,PodSandboxId:d935586e0b61feb206e7c824b84789cc170bebea7708e5019d526d0fd60b08b2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737375414083730525,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-013266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f75a212f09c8df3110c93
943840929c,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a4d5269aae0d99d3bd39ee70c1c560601d44a3483a9cc3e8be1acb82251381e,PodSandboxId:f4f175259d855901cacbc6477de2714e1a4f6cd7b93f692706ca01f8d72a77a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737375414107751009,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-013266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f98d97171f0b92d901fc20d51c6d3d53,},Annotations:map[string]strin
g{io.kubernetes.container.hash: a5faaf4e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4495b5f853e8471faa101989ad454281b7f852981d892c46f0be75b0de0e4887,PodSandboxId:8adfc3c9fc9bbb8409e885d52c45bcf7061cb27f8998e9799a59200116cffad8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737375414053710229,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-013266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e747943d43d4e38365ecf41971dd9957,},Annotations:
map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2996e3fd9d7f13cce983745b752a2f1673c2f750accc14c5954647e841517e8,PodSandboxId:17672c2add1e1221c183951c8a1c18ea549f2e490eaeca39bb046e786077e0e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737375414035713934,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-013266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40e943c2848b0d42b0ddd3ba2ab00ae7,},Annotations:map[string]
string{io.kubernetes.container.hash: beb8962f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=70bd72ba-173f-4c82-819e-f557fa720a05 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:17:15 test-preload-013266 crio[671]: time="2025-01-20 12:17:15.188239574Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4b85deea-ea0d-4528-bcfc-ac74a572dc82 name=/runtime.v1.RuntimeService/Version
	Jan 20 12:17:15 test-preload-013266 crio[671]: time="2025-01-20 12:17:15.188295701Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4b85deea-ea0d-4528-bcfc-ac74a572dc82 name=/runtime.v1.RuntimeService/Version
	Jan 20 12:17:15 test-preload-013266 crio[671]: time="2025-01-20 12:17:15.189420318Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5e589cbb-805e-494f-8e19-0ccbf9e074dd name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:17:15 test-preload-013266 crio[671]: time="2025-01-20 12:17:15.190059535Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737375435190039620,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5e589cbb-805e-494f-8e19-0ccbf9e074dd name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:17:15 test-preload-013266 crio[671]: time="2025-01-20 12:17:15.190486868Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c6f229a6-244a-4f2b-aad3-0cba7774d93d name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:17:15 test-preload-013266 crio[671]: time="2025-01-20 12:17:15.190532655Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c6f229a6-244a-4f2b-aad3-0cba7774d93d name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:17:15 test-preload-013266 crio[671]: time="2025-01-20 12:17:15.190883212Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a8caa770c222f9551e91a882062a44243d669c8f618c0ea2f37406ac80b677c5,PodSandboxId:3f122a502d5f3f0351d145f8aa3e3ee7a5864a19b0342f8861e7c5eb4f27b588,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737375426448129366,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-4hlhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd40aff5-bee9-43ec-ad77-93bbb0c9b394,},Annotations:map[string]string{io.kubernetes.container.hash: b29324f4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:698b1b3227754d8c1202f74b15891870af727cbe5934529565cfe4b8f9870094,PodSandboxId:ea4fd3c48cee7e008dd18618fa9cc2d6770818c21d86d475d00564f29b1c70bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737375420528340570,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 3d84cff8-c201-41a1-9bed-b36e2e017aa8,},Annotations:map[string]string{io.kubernetes.container.hash: f6f495a9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d35c538f4ce9ac8cb8ae0267d719995acaeddd1a5f648af7314ce49ea6af6fa,PodSandboxId:f74d18719e259122af8fb35d020f541ee5cd75881c1e6efa50264e5c94c36f45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737375419397487669,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dxzqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc
a033fc-d616-4857-b0ae-6612d550a26f,},Annotations:map[string]string{io.kubernetes.container.hash: 20a79c96,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb411d8859f6a3cedbf99324dff22e7c870ac77366c3fae9b96aeae28210745c,PodSandboxId:ea4fd3c48cee7e008dd18618fa9cc2d6770818c21d86d475d00564f29b1c70bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1737375419349635313,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d84cff8-c201-4
1a1-9bed-b36e2e017aa8,},Annotations:map[string]string{io.kubernetes.container.hash: f6f495a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1dd627b2c3c71cf1ede0e7db4ae8d6e0a44ae5d394f2a071c53ef99cc7858f6,PodSandboxId:d935586e0b61feb206e7c824b84789cc170bebea7708e5019d526d0fd60b08b2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737375414083730525,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-013266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f75a212f09c8df3110c93
943840929c,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a4d5269aae0d99d3bd39ee70c1c560601d44a3483a9cc3e8be1acb82251381e,PodSandboxId:f4f175259d855901cacbc6477de2714e1a4f6cd7b93f692706ca01f8d72a77a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737375414107751009,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-013266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f98d97171f0b92d901fc20d51c6d3d53,},Annotations:map[string]strin
g{io.kubernetes.container.hash: a5faaf4e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4495b5f853e8471faa101989ad454281b7f852981d892c46f0be75b0de0e4887,PodSandboxId:8adfc3c9fc9bbb8409e885d52c45bcf7061cb27f8998e9799a59200116cffad8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737375414053710229,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-013266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e747943d43d4e38365ecf41971dd9957,},Annotations:
map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2996e3fd9d7f13cce983745b752a2f1673c2f750accc14c5954647e841517e8,PodSandboxId:17672c2add1e1221c183951c8a1c18ea549f2e490eaeca39bb046e786077e0e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737375414035713934,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-013266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40e943c2848b0d42b0ddd3ba2ab00ae7,},Annotations:map[string]
string{io.kubernetes.container.hash: beb8962f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c6f229a6-244a-4f2b-aad3-0cba7774d93d name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:17:15 test-preload-013266 crio[671]: time="2025-01-20 12:17:15.223636190Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=15052455-b22e-4f73-80ea-f8722f79e0f2 name=/runtime.v1.RuntimeService/Version
	Jan 20 12:17:15 test-preload-013266 crio[671]: time="2025-01-20 12:17:15.223710146Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=15052455-b22e-4f73-80ea-f8722f79e0f2 name=/runtime.v1.RuntimeService/Version
	Jan 20 12:17:15 test-preload-013266 crio[671]: time="2025-01-20 12:17:15.224615328Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f0404afc-f9a1-410c-8997-9de68234fe24 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:17:15 test-preload-013266 crio[671]: time="2025-01-20 12:17:15.225066104Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737375435225048105,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f0404afc-f9a1-410c-8997-9de68234fe24 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:17:15 test-preload-013266 crio[671]: time="2025-01-20 12:17:15.225417719Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=247f6b3d-c200-4b59-af86-259fe8fe5c17 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:17:15 test-preload-013266 crio[671]: time="2025-01-20 12:17:15.225492129Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=247f6b3d-c200-4b59-af86-259fe8fe5c17 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:17:15 test-preload-013266 crio[671]: time="2025-01-20 12:17:15.225663916Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a8caa770c222f9551e91a882062a44243d669c8f618c0ea2f37406ac80b677c5,PodSandboxId:3f122a502d5f3f0351d145f8aa3e3ee7a5864a19b0342f8861e7c5eb4f27b588,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737375426448129366,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-4hlhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd40aff5-bee9-43ec-ad77-93bbb0c9b394,},Annotations:map[string]string{io.kubernetes.container.hash: b29324f4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:698b1b3227754d8c1202f74b15891870af727cbe5934529565cfe4b8f9870094,PodSandboxId:ea4fd3c48cee7e008dd18618fa9cc2d6770818c21d86d475d00564f29b1c70bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737375420528340570,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 3d84cff8-c201-41a1-9bed-b36e2e017aa8,},Annotations:map[string]string{io.kubernetes.container.hash: f6f495a9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d35c538f4ce9ac8cb8ae0267d719995acaeddd1a5f648af7314ce49ea6af6fa,PodSandboxId:f74d18719e259122af8fb35d020f541ee5cd75881c1e6efa50264e5c94c36f45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737375419397487669,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dxzqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc
a033fc-d616-4857-b0ae-6612d550a26f,},Annotations:map[string]string{io.kubernetes.container.hash: 20a79c96,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb411d8859f6a3cedbf99324dff22e7c870ac77366c3fae9b96aeae28210745c,PodSandboxId:ea4fd3c48cee7e008dd18618fa9cc2d6770818c21d86d475d00564f29b1c70bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1737375419349635313,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d84cff8-c201-4
1a1-9bed-b36e2e017aa8,},Annotations:map[string]string{io.kubernetes.container.hash: f6f495a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1dd627b2c3c71cf1ede0e7db4ae8d6e0a44ae5d394f2a071c53ef99cc7858f6,PodSandboxId:d935586e0b61feb206e7c824b84789cc170bebea7708e5019d526d0fd60b08b2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737375414083730525,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-013266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f75a212f09c8df3110c93
943840929c,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a4d5269aae0d99d3bd39ee70c1c560601d44a3483a9cc3e8be1acb82251381e,PodSandboxId:f4f175259d855901cacbc6477de2714e1a4f6cd7b93f692706ca01f8d72a77a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737375414107751009,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-013266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f98d97171f0b92d901fc20d51c6d3d53,},Annotations:map[string]strin
g{io.kubernetes.container.hash: a5faaf4e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4495b5f853e8471faa101989ad454281b7f852981d892c46f0be75b0de0e4887,PodSandboxId:8adfc3c9fc9bbb8409e885d52c45bcf7061cb27f8998e9799a59200116cffad8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737375414053710229,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-013266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e747943d43d4e38365ecf41971dd9957,},Annotations:
map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2996e3fd9d7f13cce983745b752a2f1673c2f750accc14c5954647e841517e8,PodSandboxId:17672c2add1e1221c183951c8a1c18ea549f2e490eaeca39bb046e786077e0e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737375414035713934,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-013266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40e943c2848b0d42b0ddd3ba2ab00ae7,},Annotations:map[string]
string{io.kubernetes.container.hash: beb8962f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=247f6b3d-c200-4b59-af86-259fe8fe5c17 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:17:15 test-preload-013266 crio[671]: time="2025-01-20 12:17:15.254052170Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=54b51c72-fe9d-4a27-af3b-37bee8d3e1cf name=/runtime.v1.RuntimeService/Version
	Jan 20 12:17:15 test-preload-013266 crio[671]: time="2025-01-20 12:17:15.254122619Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=54b51c72-fe9d-4a27-af3b-37bee8d3e1cf name=/runtime.v1.RuntimeService/Version
	Jan 20 12:17:15 test-preload-013266 crio[671]: time="2025-01-20 12:17:15.255252200Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e7724807-ffe3-4442-ac15-f9b8f90a3107 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:17:15 test-preload-013266 crio[671]: time="2025-01-20 12:17:15.255629391Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737375435255611054,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e7724807-ffe3-4442-ac15-f9b8f90a3107 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:17:15 test-preload-013266 crio[671]: time="2025-01-20 12:17:15.256193263Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fdda00c1-1b57-445a-bb20-46b0df8fbd07 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:17:15 test-preload-013266 crio[671]: time="2025-01-20 12:17:15.256270323Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fdda00c1-1b57-445a-bb20-46b0df8fbd07 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:17:15 test-preload-013266 crio[671]: time="2025-01-20 12:17:15.256447497Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a8caa770c222f9551e91a882062a44243d669c8f618c0ea2f37406ac80b677c5,PodSandboxId:3f122a502d5f3f0351d145f8aa3e3ee7a5864a19b0342f8861e7c5eb4f27b588,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737375426448129366,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-4hlhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd40aff5-bee9-43ec-ad77-93bbb0c9b394,},Annotations:map[string]string{io.kubernetes.container.hash: b29324f4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:698b1b3227754d8c1202f74b15891870af727cbe5934529565cfe4b8f9870094,PodSandboxId:ea4fd3c48cee7e008dd18618fa9cc2d6770818c21d86d475d00564f29b1c70bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737375420528340570,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 3d84cff8-c201-41a1-9bed-b36e2e017aa8,},Annotations:map[string]string{io.kubernetes.container.hash: f6f495a9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d35c538f4ce9ac8cb8ae0267d719995acaeddd1a5f648af7314ce49ea6af6fa,PodSandboxId:f74d18719e259122af8fb35d020f541ee5cd75881c1e6efa50264e5c94c36f45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737375419397487669,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dxzqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc
a033fc-d616-4857-b0ae-6612d550a26f,},Annotations:map[string]string{io.kubernetes.container.hash: 20a79c96,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb411d8859f6a3cedbf99324dff22e7c870ac77366c3fae9b96aeae28210745c,PodSandboxId:ea4fd3c48cee7e008dd18618fa9cc2d6770818c21d86d475d00564f29b1c70bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1737375419349635313,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d84cff8-c201-4
1a1-9bed-b36e2e017aa8,},Annotations:map[string]string{io.kubernetes.container.hash: f6f495a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1dd627b2c3c71cf1ede0e7db4ae8d6e0a44ae5d394f2a071c53ef99cc7858f6,PodSandboxId:d935586e0b61feb206e7c824b84789cc170bebea7708e5019d526d0fd60b08b2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737375414083730525,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-013266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f75a212f09c8df3110c93
943840929c,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a4d5269aae0d99d3bd39ee70c1c560601d44a3483a9cc3e8be1acb82251381e,PodSandboxId:f4f175259d855901cacbc6477de2714e1a4f6cd7b93f692706ca01f8d72a77a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737375414107751009,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-013266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f98d97171f0b92d901fc20d51c6d3d53,},Annotations:map[string]strin
g{io.kubernetes.container.hash: a5faaf4e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4495b5f853e8471faa101989ad454281b7f852981d892c46f0be75b0de0e4887,PodSandboxId:8adfc3c9fc9bbb8409e885d52c45bcf7061cb27f8998e9799a59200116cffad8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737375414053710229,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-013266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e747943d43d4e38365ecf41971dd9957,},Annotations:
map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2996e3fd9d7f13cce983745b752a2f1673c2f750accc14c5954647e841517e8,PodSandboxId:17672c2add1e1221c183951c8a1c18ea549f2e490eaeca39bb046e786077e0e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737375414035713934,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-013266,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40e943c2848b0d42b0ddd3ba2ab00ae7,},Annotations:map[string]
string{io.kubernetes.container.hash: beb8962f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fdda00c1-1b57-445a-bb20-46b0df8fbd07 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a8caa770c222f       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   8 seconds ago       Running             coredns                   1                   3f122a502d5f3       coredns-6d4b75cb6d-4hlhp
	698b1b3227754       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       2                   ea4fd3c48cee7       storage-provisioner
	2d35c538f4ce9       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   15 seconds ago      Running             kube-proxy                1                   f74d18719e259       kube-proxy-dxzqp
	bb411d8859f6a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Exited              storage-provisioner       1                   ea4fd3c48cee7       storage-provisioner
	7a4d5269aae0d       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   f4f175259d855       etcd-test-preload-013266
	c1dd627b2c3c7       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago      Running             kube-scheduler            1                   d935586e0b61f       kube-scheduler-test-preload-013266
	4495b5f853e84       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   21 seconds ago      Running             kube-controller-manager   1                   8adfc3c9fc9bb       kube-controller-manager-test-preload-013266
	c2996e3fd9d7f       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago      Running             kube-apiserver            1                   17672c2add1e1       kube-apiserver-test-preload-013266
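	
	The table above is the CRI-O view of the node's containers; a roughly equivalent listing can be pulled straight from the guest with crictl. A minimal sketch, assuming the test-preload-013266 profile is still running (the ssh pass-through and crictl invocation are illustrative, not captured from this run):
	
	    minikube -p test-preload-013266 ssh -- sudo crictl ps -a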
	
	
	==> coredns [a8caa770c222f9551e91a882062a44243d669c8f618c0ea2f37406ac80b677c5] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:47430 - 56879 "HINFO IN 8803799072996173652.4651140284413616577. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.0099699s
	
	
	==> describe nodes <==
	Name:               test-preload-013266
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-013266
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77d80cf1517f5f1439721b28711982314b21bec9
	                    minikube.k8s.io/name=test-preload-013266
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_20T12_15_31_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Jan 2025 12:15:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-013266
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Jan 2025 12:17:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Jan 2025 12:17:08 +0000   Mon, 20 Jan 2025 12:15:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Jan 2025 12:17:08 +0000   Mon, 20 Jan 2025 12:15:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Jan 2025 12:17:08 +0000   Mon, 20 Jan 2025 12:15:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Jan 2025 12:17:08 +0000   Mon, 20 Jan 2025 12:17:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.82
	  Hostname:    test-preload-013266
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2de12f5e610c4a2290608584f2cf9061
	  System UUID:                2de12f5e-610c-4a22-9060-8584f2cf9061
	  Boot ID:                    dad927e2-d555-4afb-bfa7-fb6725524739
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-4hlhp                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     91s
	  kube-system                 etcd-test-preload-013266                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         104s
	  kube-system                 kube-apiserver-test-preload-013266             250m (12%)    0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-controller-manager-test-preload-013266    200m (10%)    0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-proxy-dxzqp                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-scheduler-test-preload-013266             100m (5%)     0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15s                kube-proxy       
	  Normal  Starting                 89s                kube-proxy       
	  Normal  Starting                 104s               kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  104s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  104s               kubelet          Node test-preload-013266 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s               kubelet          Node test-preload-013266 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s               kubelet          Node test-preload-013266 status is now: NodeHasSufficientPID
	  Normal  NodeReady                94s                kubelet          Node test-preload-013266 status is now: NodeReady
	  Normal  RegisteredNode           92s                node-controller  Node test-preload-013266 event: Registered Node test-preload-013266 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node test-preload-013266 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node test-preload-013266 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node test-preload-013266 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5s                 node-controller  Node test-preload-013266 event: Registered Node test-preload-013266 in Controller
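	
	The node description above can be regenerated against the same cluster with kubectl; a minimal sketch, assuming the kubeconfig context carries the profile name (the command itself is illustrative, not captured from this run):
	
	    kubectl --context test-preload-013266 describe node test-preload-013266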
	
	
	==> dmesg <==
	[Jan20 12:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051985] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037221] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.849871] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.943472] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.532963] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.681701] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.053089] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052931] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.174761] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.111638] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.271915] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[ +12.747252] systemd-fstab-generator[996]: Ignoring "noauto" option for root device
	[  +0.057303] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.729874] systemd-fstab-generator[1126]: Ignoring "noauto" option for root device
	[  +5.531396] kauditd_printk_skb: 105 callbacks suppressed
	[  +2.082422] systemd-fstab-generator[1827]: Ignoring "noauto" option for root device
	[Jan20 12:17] kauditd_printk_skb: 58 callbacks suppressed
	
	
	==> etcd [7a4d5269aae0d99d3bd39ee70c1c560601d44a3483a9cc3e8be1acb82251381e] <==
	{"level":"info","ts":"2025-01-20T12:16:54.510Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"c40e3e084b2d242d","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-01-20T12:16:54.510Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-01-20T12:16:54.517Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c40e3e084b2d242d switched to configuration voters=(14127297286449734701)"}
	{"level":"info","ts":"2025-01-20T12:16:54.517Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b43c45dcb7e17c14","local-member-id":"c40e3e084b2d242d","added-peer-id":"c40e3e084b2d242d","added-peer-peer-urls":["https://192.168.39.82:2380"]}
	{"level":"info","ts":"2025-01-20T12:16:54.517Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b43c45dcb7e17c14","local-member-id":"c40e3e084b2d242d","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-20T12:16:54.517Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-20T12:16:54.524Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.82:2380"}
	{"level":"info","ts":"2025-01-20T12:16:54.526Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.82:2380"}
	{"level":"info","ts":"2025-01-20T12:16:54.526Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-01-20T12:16:54.527Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"c40e3e084b2d242d","initial-advertise-peer-urls":["https://192.168.39.82:2380"],"listen-peer-urls":["https://192.168.39.82:2380"],"advertise-client-urls":["https://192.168.39.82:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.82:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-01-20T12:16:54.527Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-01-20T12:16:55.741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c40e3e084b2d242d is starting a new election at term 2"}
	{"level":"info","ts":"2025-01-20T12:16:55.741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c40e3e084b2d242d became pre-candidate at term 2"}
	{"level":"info","ts":"2025-01-20T12:16:55.741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c40e3e084b2d242d received MsgPreVoteResp from c40e3e084b2d242d at term 2"}
	{"level":"info","ts":"2025-01-20T12:16:55.741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c40e3e084b2d242d became candidate at term 3"}
	{"level":"info","ts":"2025-01-20T12:16:55.741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c40e3e084b2d242d received MsgVoteResp from c40e3e084b2d242d at term 3"}
	{"level":"info","ts":"2025-01-20T12:16:55.741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c40e3e084b2d242d became leader at term 3"}
	{"level":"info","ts":"2025-01-20T12:16:55.741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c40e3e084b2d242d elected leader c40e3e084b2d242d at term 3"}
	{"level":"info","ts":"2025-01-20T12:16:55.741Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"c40e3e084b2d242d","local-member-attributes":"{Name:test-preload-013266 ClientURLs:[https://192.168.39.82:2379]}","request-path":"/0/members/c40e3e084b2d242d/attributes","cluster-id":"b43c45dcb7e17c14","publish-timeout":"7s"}
	{"level":"info","ts":"2025-01-20T12:16:55.741Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-20T12:16:55.742Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-20T12:16:55.742Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-01-20T12:16:55.743Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-20T12:16:55.744Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.82:2379"}
	{"level":"info","ts":"2025-01-20T12:16:55.745Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 12:17:15 up 0 min,  0 users,  load average: 1.01, 0.29, 0.10
	Linux test-preload-013266 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c2996e3fd9d7f13cce983745b752a2f1673c2f750accc14c5954647e841517e8] <==
	I0120 12:16:57.959594       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0120 12:16:57.959625       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0120 12:16:57.959657       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0120 12:16:57.959719       1 apf_controller.go:317] Starting API Priority and Fairness config controller
	I0120 12:16:57.960275       1 controller.go:83] Starting OpenAPI AggregationController
	I0120 12:16:57.960326       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0120 12:16:58.051583       1 cache.go:39] Caches are synced for autoregister controller
	I0120 12:16:58.052224       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0120 12:16:58.053097       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0120 12:16:58.053144       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0120 12:16:58.053484       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0120 12:16:58.060051       1 apf_controller.go:322] Running API Priority and Fairness config worker
	E0120 12:16:58.062063       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0120 12:16:58.109592       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0120 12:16:58.122704       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0120 12:16:58.620374       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0120 12:16:58.962202       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0120 12:16:59.645325       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0120 12:16:59.655254       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0120 12:16:59.692829       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0120 12:16:59.713894       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0120 12:16:59.722589       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0120 12:16:59.784604       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0120 12:17:10.628461       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0120 12:17:10.646239       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [4495b5f853e8471faa101989ad454281b7f852981d892c46f0be75b0de0e4887] <==
	I0120 12:17:10.554124       1 shared_informer.go:262] Caches are synced for ephemeral
	I0120 12:17:10.556353       1 shared_informer.go:262] Caches are synced for taint
	I0120 12:17:10.556453       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0120 12:17:10.556538       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-013266. Assuming now as a timestamp.
	I0120 12:17:10.556579       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0120 12:17:10.557315       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0120 12:17:10.558178       1 event.go:294] "Event occurred" object="test-preload-013266" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-013266 event: Registered Node test-preload-013266 in Controller"
	I0120 12:17:10.559875       1 shared_informer.go:262] Caches are synced for attach detach
	I0120 12:17:10.560980       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0120 12:17:10.566693       1 shared_informer.go:262] Caches are synced for HPA
	I0120 12:17:10.567846       1 shared_informer.go:262] Caches are synced for stateful set
	I0120 12:17:10.568927       1 shared_informer.go:262] Caches are synced for disruption
	I0120 12:17:10.568949       1 disruption.go:371] Sending events to api server.
	I0120 12:17:10.590746       1 shared_informer.go:262] Caches are synced for resource quota
	I0120 12:17:10.603507       1 shared_informer.go:262] Caches are synced for daemon sets
	I0120 12:17:10.604892       1 shared_informer.go:262] Caches are synced for deployment
	I0120 12:17:10.610836       1 shared_informer.go:262] Caches are synced for persistent volume
	I0120 12:17:10.617120       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0120 12:17:10.629238       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0120 12:17:10.629593       1 shared_informer.go:262] Caches are synced for job
	I0120 12:17:10.636502       1 shared_informer.go:262] Caches are synced for resource quota
	I0120 12:17:10.637890       1 shared_informer.go:262] Caches are synced for endpoint
	I0120 12:17:10.999849       1 shared_informer.go:262] Caches are synced for garbage collector
	I0120 12:17:10.999951       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0120 12:17:11.015266       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [2d35c538f4ce9ac8cb8ae0267d719995acaeddd1a5f648af7314ce49ea6af6fa] <==
	I0120 12:16:59.746291       1 node.go:163] Successfully retrieved node IP: 192.168.39.82
	I0120 12:16:59.746858       1 server_others.go:138] "Detected node IP" address="192.168.39.82"
	I0120 12:16:59.746949       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0120 12:16:59.778570       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0120 12:16:59.778595       1 server_others.go:206] "Using iptables Proxier"
	I0120 12:16:59.778868       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0120 12:16:59.779560       1 server.go:661] "Version info" version="v1.24.4"
	I0120 12:16:59.779585       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0120 12:16:59.781332       1 config.go:317] "Starting service config controller"
	I0120 12:16:59.781369       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0120 12:16:59.781392       1 config.go:226] "Starting endpoint slice config controller"
	I0120 12:16:59.781396       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0120 12:16:59.782227       1 config.go:444] "Starting node config controller"
	I0120 12:16:59.782250       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0120 12:16:59.882414       1 shared_informer.go:262] Caches are synced for node config
	I0120 12:16:59.882454       1 shared_informer.go:262] Caches are synced for service config
	I0120 12:16:59.882476       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [c1dd627b2c3c71cf1ede0e7db4ae8d6e0a44ae5d394f2a071c53ef99cc7858f6] <==
	I0120 12:16:55.281143       1 serving.go:348] Generated self-signed cert in-memory
	W0120 12:16:58.018888       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0120 12:16:58.019024       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0120 12:16:58.019096       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0120 12:16:58.019126       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0120 12:16:58.074358       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0120 12:16:58.074480       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0120 12:16:58.083724       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0120 12:16:58.083925       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0120 12:16:58.085172       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0120 12:16:58.083964       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0120 12:16:58.186353       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 20 12:16:58 test-preload-013266 kubelet[1133]: I0120 12:16:58.341688    1133 topology_manager.go:200] "Topology Admit Handler"
	Jan 20 12:16:58 test-preload-013266 kubelet[1133]: I0120 12:16:58.341854    1133 topology_manager.go:200] "Topology Admit Handler"
	Jan 20 12:16:58 test-preload-013266 kubelet[1133]: I0120 12:16:58.341913    1133 topology_manager.go:200] "Topology Admit Handler"
	Jan 20 12:16:58 test-preload-013266 kubelet[1133]: E0120 12:16:58.342994    1133 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-4hlhp" podUID=fd40aff5-bee9-43ec-ad77-93bbb0c9b394
	Jan 20 12:16:58 test-preload-013266 kubelet[1133]: E0120 12:16:58.399399    1133 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jan 20 12:16:58 test-preload-013266 kubelet[1133]: I0120 12:16:58.400707    1133 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd40aff5-bee9-43ec-ad77-93bbb0c9b394-config-volume\") pod \"coredns-6d4b75cb6d-4hlhp\" (UID: \"fd40aff5-bee9-43ec-ad77-93bbb0c9b394\") " pod="kube-system/coredns-6d4b75cb6d-4hlhp"
	Jan 20 12:16:58 test-preload-013266 kubelet[1133]: I0120 12:16:58.400890    1133 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97tq5\" (UniqueName: \"kubernetes.io/projected/fd40aff5-bee9-43ec-ad77-93bbb0c9b394-kube-api-access-97tq5\") pod \"coredns-6d4b75cb6d-4hlhp\" (UID: \"fd40aff5-bee9-43ec-ad77-93bbb0c9b394\") " pod="kube-system/coredns-6d4b75cb6d-4hlhp"
	Jan 20 12:16:58 test-preload-013266 kubelet[1133]: I0120 12:16:58.400997    1133 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dln74\" (UniqueName: \"kubernetes.io/projected/3d84cff8-c201-41a1-9bed-b36e2e017aa8-kube-api-access-dln74\") pod \"storage-provisioner\" (UID: \"3d84cff8-c201-41a1-9bed-b36e2e017aa8\") " pod="kube-system/storage-provisioner"
	Jan 20 12:16:58 test-preload-013266 kubelet[1133]: I0120 12:16:58.401088    1133 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cca033fc-d616-4857-b0ae-6612d550a26f-kube-proxy\") pod \"kube-proxy-dxzqp\" (UID: \"cca033fc-d616-4857-b0ae-6612d550a26f\") " pod="kube-system/kube-proxy-dxzqp"
	Jan 20 12:16:58 test-preload-013266 kubelet[1133]: I0120 12:16:58.401150    1133 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cca033fc-d616-4857-b0ae-6612d550a26f-xtables-lock\") pod \"kube-proxy-dxzqp\" (UID: \"cca033fc-d616-4857-b0ae-6612d550a26f\") " pod="kube-system/kube-proxy-dxzqp"
	Jan 20 12:16:58 test-preload-013266 kubelet[1133]: I0120 12:16:58.401215    1133 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cca033fc-d616-4857-b0ae-6612d550a26f-lib-modules\") pod \"kube-proxy-dxzqp\" (UID: \"cca033fc-d616-4857-b0ae-6612d550a26f\") " pod="kube-system/kube-proxy-dxzqp"
	Jan 20 12:16:58 test-preload-013266 kubelet[1133]: I0120 12:16:58.401286    1133 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3d84cff8-c201-41a1-9bed-b36e2e017aa8-tmp\") pod \"storage-provisioner\" (UID: \"3d84cff8-c201-41a1-9bed-b36e2e017aa8\") " pod="kube-system/storage-provisioner"
	Jan 20 12:16:58 test-preload-013266 kubelet[1133]: I0120 12:16:58.401350    1133 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6wp4\" (UniqueName: \"kubernetes.io/projected/cca033fc-d616-4857-b0ae-6612d550a26f-kube-api-access-k6wp4\") pod \"kube-proxy-dxzqp\" (UID: \"cca033fc-d616-4857-b0ae-6612d550a26f\") " pod="kube-system/kube-proxy-dxzqp"
	Jan 20 12:16:58 test-preload-013266 kubelet[1133]: I0120 12:16:58.401425    1133 reconciler.go:159] "Reconciler: start to sync state"
	Jan 20 12:16:58 test-preload-013266 kubelet[1133]: E0120 12:16:58.503950    1133 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 20 12:16:58 test-preload-013266 kubelet[1133]: E0120 12:16:58.504390    1133 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/fd40aff5-bee9-43ec-ad77-93bbb0c9b394-config-volume podName:fd40aff5-bee9-43ec-ad77-93bbb0c9b394 nodeName:}" failed. No retries permitted until 2025-01-20 12:16:59.004087071 +0000 UTC m=+5.775072085 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd40aff5-bee9-43ec-ad77-93bbb0c9b394-config-volume") pod "coredns-6d4b75cb6d-4hlhp" (UID: "fd40aff5-bee9-43ec-ad77-93bbb0c9b394") : object "kube-system"/"coredns" not registered
	Jan 20 12:16:59 test-preload-013266 kubelet[1133]: E0120 12:16:59.006986    1133 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 20 12:16:59 test-preload-013266 kubelet[1133]: E0120 12:16:59.007052    1133 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/fd40aff5-bee9-43ec-ad77-93bbb0c9b394-config-volume podName:fd40aff5-bee9-43ec-ad77-93bbb0c9b394 nodeName:}" failed. No retries permitted until 2025-01-20 12:17:00.007038004 +0000 UTC m=+6.778023018 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd40aff5-bee9-43ec-ad77-93bbb0c9b394-config-volume") pod "coredns-6d4b75cb6d-4hlhp" (UID: "fd40aff5-bee9-43ec-ad77-93bbb0c9b394") : object "kube-system"/"coredns" not registered
	Jan 20 12:17:00 test-preload-013266 kubelet[1133]: E0120 12:17:00.013467    1133 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 20 12:17:00 test-preload-013266 kubelet[1133]: E0120 12:17:00.013552    1133 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/fd40aff5-bee9-43ec-ad77-93bbb0c9b394-config-volume podName:fd40aff5-bee9-43ec-ad77-93bbb0c9b394 nodeName:}" failed. No retries permitted until 2025-01-20 12:17:02.013536155 +0000 UTC m=+8.784521168 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd40aff5-bee9-43ec-ad77-93bbb0c9b394-config-volume") pod "coredns-6d4b75cb6d-4hlhp" (UID: "fd40aff5-bee9-43ec-ad77-93bbb0c9b394") : object "kube-system"/"coredns" not registered
	Jan 20 12:17:00 test-preload-013266 kubelet[1133]: E0120 12:17:00.439525    1133 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-4hlhp" podUID=fd40aff5-bee9-43ec-ad77-93bbb0c9b394
	Jan 20 12:17:00 test-preload-013266 kubelet[1133]: I0120 12:17:00.518459    1133 scope.go:110] "RemoveContainer" containerID="bb411d8859f6a3cedbf99324dff22e7c870ac77366c3fae9b96aeae28210745c"
	Jan 20 12:17:02 test-preload-013266 kubelet[1133]: E0120 12:17:02.032289    1133 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 20 12:17:02 test-preload-013266 kubelet[1133]: E0120 12:17:02.032878    1133 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/fd40aff5-bee9-43ec-ad77-93bbb0c9b394-config-volume podName:fd40aff5-bee9-43ec-ad77-93bbb0c9b394 nodeName:}" failed. No retries permitted until 2025-01-20 12:17:06.032857767 +0000 UTC m=+12.803842793 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd40aff5-bee9-43ec-ad77-93bbb0c9b394-config-volume") pod "coredns-6d4b75cb6d-4hlhp" (UID: "fd40aff5-bee9-43ec-ad77-93bbb0c9b394") : object "kube-system"/"coredns" not registered
	Jan 20 12:17:02 test-preload-013266 kubelet[1133]: E0120 12:17:02.440069    1133 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-4hlhp" podUID=fd40aff5-bee9-43ec-ad77-93bbb0c9b394
	
	
	==> storage-provisioner [698b1b3227754d8c1202f74b15891870af727cbe5934529565cfe4b8f9870094] <==
	I0120 12:17:00.589289       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0120 12:17:00.599253       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0120 12:17:00.599416       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [bb411d8859f6a3cedbf99324dff22e7c870ac77366c3fae9b96aeae28210745c] <==
	I0120 12:16:59.453027       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0120 12:16:59.464758       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-013266 -n test-preload-013266
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-013266 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-013266" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-013266
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-013266: (1.146861607s)
--- FAIL: TestPreload (173.15s)

                                                
                                    
x
+
TestKubernetesUpgrade (382.92s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-049625 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-049625 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m25.337365014s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-049625] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20151
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20151-942401/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-942401/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-049625" primary control-plane node in "kubernetes-upgrade-049625" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 12:20:17.293872  983149 out.go:345] Setting OutFile to fd 1 ...
	I0120 12:20:17.293989  983149 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:20:17.294002  983149 out.go:358] Setting ErrFile to fd 2...
	I0120 12:20:17.294009  983149 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:20:17.294178  983149 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-942401/.minikube/bin
	I0120 12:20:17.294795  983149 out.go:352] Setting JSON to false
	I0120 12:20:17.295847  983149 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":18160,"bootTime":1737357457,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 12:20:17.295969  983149 start.go:139] virtualization: kvm guest
	I0120 12:20:17.298163  983149 out.go:177] * [kubernetes-upgrade-049625] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 12:20:17.299456  983149 notify.go:220] Checking for updates...
	I0120 12:20:17.299463  983149 out.go:177]   - MINIKUBE_LOCATION=20151
	I0120 12:20:17.300844  983149 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 12:20:17.302278  983149 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:20:17.303663  983149 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-942401/.minikube
	I0120 12:20:17.304903  983149 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 12:20:17.306245  983149 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 12:20:17.308050  983149 config.go:182] Loaded profile config "NoKubernetes-378897": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:20:17.308217  983149 config.go:182] Loaded profile config "pause-298045": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:20:17.308332  983149 config.go:182] Loaded profile config "running-upgrade-438919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0120 12:20:17.308475  983149 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 12:20:17.346963  983149 out.go:177] * Using the kvm2 driver based on user configuration
	I0120 12:20:17.348316  983149 start.go:297] selected driver: kvm2
	I0120 12:20:17.348358  983149 start.go:901] validating driver "kvm2" against <nil>
	I0120 12:20:17.348379  983149 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 12:20:17.349139  983149 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:20:17.349247  983149 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20151-942401/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 12:20:17.365745  983149 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 12:20:17.365794  983149 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 12:20:17.366008  983149 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0120 12:20:17.366041  983149 cni.go:84] Creating CNI manager for ""
	I0120 12:20:17.366075  983149 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:20:17.366081  983149 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0120 12:20:17.366128  983149 start.go:340] cluster config:
	{Name:kubernetes-upgrade-049625 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-049625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:20:17.366225  983149 iso.go:125] acquiring lock: {Name:mk7f0ac9e7ba04626414e742c3e5292d79996a4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:20:17.367937  983149 out.go:177] * Starting "kubernetes-upgrade-049625" primary control-plane node in "kubernetes-upgrade-049625" cluster
	I0120 12:20:17.369139  983149 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0120 12:20:17.369178  983149 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0120 12:20:17.369187  983149 cache.go:56] Caching tarball of preloaded images
	I0120 12:20:17.369256  983149 preload.go:172] Found /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 12:20:17.369266  983149 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0120 12:20:17.369343  983149 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625/config.json ...
	I0120 12:20:17.369383  983149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625/config.json: {Name:mk4edb850fb8348e4b701be7c92f0e58f4cae245 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:20:17.369514  983149 start.go:360] acquireMachinesLock for kubernetes-upgrade-049625: {Name:mkd5527ce9753efd08511b23d71dbb6bbf416f1b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 12:21:10.172509  983149 start.go:364] duration metric: took 52.802948314s to acquireMachinesLock for "kubernetes-upgrade-049625"
	I0120 12:21:10.172611  983149 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-049625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-049625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 12:21:10.172737  983149 start.go:125] createHost starting for "" (driver="kvm2")
	I0120 12:21:10.174551  983149 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0120 12:21:10.174769  983149 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:21:10.174852  983149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:21:10.196064  983149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38965
	I0120 12:21:10.196500  983149 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:21:10.197093  983149 main.go:141] libmachine: Using API Version  1
	I0120 12:21:10.197122  983149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:21:10.197539  983149 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:21:10.197783  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetMachineName
	I0120 12:21:10.197954  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .DriverName
	I0120 12:21:10.198128  983149 start.go:159] libmachine.API.Create for "kubernetes-upgrade-049625" (driver="kvm2")
	I0120 12:21:10.198154  983149 client.go:168] LocalClient.Create starting
	I0120 12:21:10.198188  983149 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem
	I0120 12:21:10.198226  983149 main.go:141] libmachine: Decoding PEM data...
	I0120 12:21:10.198246  983149 main.go:141] libmachine: Parsing certificate...
	I0120 12:21:10.198320  983149 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem
	I0120 12:21:10.198344  983149 main.go:141] libmachine: Decoding PEM data...
	I0120 12:21:10.198360  983149 main.go:141] libmachine: Parsing certificate...
	I0120 12:21:10.198382  983149 main.go:141] libmachine: Running pre-create checks...
	I0120 12:21:10.198396  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .PreCreateCheck
	I0120 12:21:10.198830  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetConfigRaw
	I0120 12:21:10.199319  983149 main.go:141] libmachine: Creating machine...
	I0120 12:21:10.199345  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .Create
	I0120 12:21:10.199503  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) creating KVM machine...
	I0120 12:21:10.199517  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) creating network...
	I0120 12:21:10.200977  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | found existing default KVM network
	I0120 12:21:10.203092  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | I0120 12:21:10.202879  983827 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:85:dd:5e} reservation:<nil>}
	I0120 12:21:10.204651  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | I0120 12:21:10.204516  983827 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:67:b2:5f} reservation:<nil>}
	I0120 12:21:10.205910  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | I0120 12:21:10.205816  983827 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:46:1b:37} reservation:<nil>}
	I0120 12:21:10.207888  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | I0120 12:21:10.207791  983827 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a5f00}
	I0120 12:21:10.207931  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | created network xml: 
	I0120 12:21:10.207949  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | <network>
	I0120 12:21:10.207961  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG |   <name>mk-kubernetes-upgrade-049625</name>
	I0120 12:21:10.207970  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG |   <dns enable='no'/>
	I0120 12:21:10.207980  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG |   
	I0120 12:21:10.208301  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0120 12:21:10.208329  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG |     <dhcp>
	I0120 12:21:10.208343  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0120 12:21:10.208366  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG |     </dhcp>
	I0120 12:21:10.208374  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG |   </ip>
	I0120 12:21:10.208380  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG |   
	I0120 12:21:10.208392  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | </network>
	I0120 12:21:10.208398  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | 
	I0120 12:21:10.213348  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | trying to create private KVM network mk-kubernetes-upgrade-049625 192.168.72.0/24...
	I0120 12:21:10.290409  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | private KVM network mk-kubernetes-upgrade-049625 192.168.72.0/24 created
	I0120 12:21:10.290443  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | I0120 12:21:10.290359  983827 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20151-942401/.minikube
	I0120 12:21:10.290459  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) setting up store path in /home/jenkins/minikube-integration/20151-942401/.minikube/machines/kubernetes-upgrade-049625 ...
	I0120 12:21:10.290475  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) building disk image from file:///home/jenkins/minikube-integration/20151-942401/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0120 12:21:10.290593  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Downloading /home/jenkins/minikube-integration/20151-942401/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20151-942401/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0120 12:21:10.668416  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | I0120 12:21:10.668272  983827 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/kubernetes-upgrade-049625/id_rsa...
	I0120 12:21:10.766756  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | I0120 12:21:10.766629  983827 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/kubernetes-upgrade-049625/kubernetes-upgrade-049625.rawdisk...
	I0120 12:21:10.766793  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | Writing magic tar header
	I0120 12:21:10.766813  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | Writing SSH key tar header
	I0120 12:21:10.766825  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | I0120 12:21:10.766768  983827 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20151-942401/.minikube/machines/kubernetes-upgrade-049625 ...
	I0120 12:21:10.766892  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/kubernetes-upgrade-049625
	I0120 12:21:10.766934  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-942401/.minikube/machines
	I0120 12:21:10.766952  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) setting executable bit set on /home/jenkins/minikube-integration/20151-942401/.minikube/machines/kubernetes-upgrade-049625 (perms=drwx------)
	I0120 12:21:10.766963  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-942401/.minikube
	I0120 12:21:10.766984  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-942401
	I0120 12:21:10.766999  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0120 12:21:10.767019  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) setting executable bit set on /home/jenkins/minikube-integration/20151-942401/.minikube/machines (perms=drwxr-xr-x)
	I0120 12:21:10.767032  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | checking permissions on dir: /home/jenkins
	I0120 12:21:10.767048  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | checking permissions on dir: /home
	I0120 12:21:10.767059  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | skipping /home - not owner
	I0120 12:21:10.767075  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) setting executable bit set on /home/jenkins/minikube-integration/20151-942401/.minikube (perms=drwxr-xr-x)
	I0120 12:21:10.767089  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) setting executable bit set on /home/jenkins/minikube-integration/20151-942401 (perms=drwxrwxr-x)
	I0120 12:21:10.767104  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0120 12:21:10.767132  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0120 12:21:10.767144  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) creating domain...
	I0120 12:21:10.768462  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) define libvirt domain using xml: 
	I0120 12:21:10.768476  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) <domain type='kvm'>
	I0120 12:21:10.768487  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)   <name>kubernetes-upgrade-049625</name>
	I0120 12:21:10.768501  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)   <memory unit='MiB'>2200</memory>
	I0120 12:21:10.768511  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)   <vcpu>2</vcpu>
	I0120 12:21:10.768535  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)   <features>
	I0120 12:21:10.768548  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)     <acpi/>
	I0120 12:21:10.768556  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)     <apic/>
	I0120 12:21:10.768569  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)     <pae/>
	I0120 12:21:10.768580  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)     
	I0120 12:21:10.768592  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)   </features>
	I0120 12:21:10.768604  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)   <cpu mode='host-passthrough'>
	I0120 12:21:10.768613  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)   
	I0120 12:21:10.768629  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)   </cpu>
	I0120 12:21:10.768642  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)   <os>
	I0120 12:21:10.768653  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)     <type>hvm</type>
	I0120 12:21:10.768666  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)     <boot dev='cdrom'/>
	I0120 12:21:10.768684  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)     <boot dev='hd'/>
	I0120 12:21:10.768697  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)     <bootmenu enable='no'/>
	I0120 12:21:10.768712  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)   </os>
	I0120 12:21:10.768725  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)   <devices>
	I0120 12:21:10.768734  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)     <disk type='file' device='cdrom'>
	I0120 12:21:10.768754  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)       <source file='/home/jenkins/minikube-integration/20151-942401/.minikube/machines/kubernetes-upgrade-049625/boot2docker.iso'/>
	I0120 12:21:10.768766  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)       <target dev='hdc' bus='scsi'/>
	I0120 12:21:10.768776  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)       <readonly/>
	I0120 12:21:10.768791  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)     </disk>
	I0120 12:21:10.768805  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)     <disk type='file' device='disk'>
	I0120 12:21:10.768822  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0120 12:21:10.768841  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)       <source file='/home/jenkins/minikube-integration/20151-942401/.minikube/machines/kubernetes-upgrade-049625/kubernetes-upgrade-049625.rawdisk'/>
	I0120 12:21:10.768854  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)       <target dev='hda' bus='virtio'/>
	I0120 12:21:10.768929  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)     </disk>
	I0120 12:21:10.768974  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)     <interface type='network'>
	I0120 12:21:10.768992  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)       <source network='mk-kubernetes-upgrade-049625'/>
	I0120 12:21:10.769007  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)       <model type='virtio'/>
	I0120 12:21:10.769017  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)     </interface>
	I0120 12:21:10.769028  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)     <interface type='network'>
	I0120 12:21:10.769042  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)       <source network='default'/>
	I0120 12:21:10.769053  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)       <model type='virtio'/>
	I0120 12:21:10.769074  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)     </interface>
	I0120 12:21:10.769105  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)     <serial type='pty'>
	I0120 12:21:10.769132  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)       <target port='0'/>
	I0120 12:21:10.769140  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)     </serial>
	I0120 12:21:10.769149  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)     <console type='pty'>
	I0120 12:21:10.769161  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)       <target type='serial' port='0'/>
	I0120 12:21:10.769169  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)     </console>
	I0120 12:21:10.769175  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)     <rng model='virtio'>
	I0120 12:21:10.769211  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)       <backend model='random'>/dev/random</backend>
	I0120 12:21:10.769232  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)     </rng>
	I0120 12:21:10.769243  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)     
	I0120 12:21:10.769255  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)     
	I0120 12:21:10.769266  983149 main.go:141] libmachine: (kubernetes-upgrade-049625)   </devices>
	I0120 12:21:10.769277  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) </domain>
	I0120 12:21:10.769303  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) 
	I0120 12:21:10.775895  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:8c:4d:4b in network default
	I0120 12:21:10.776482  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:10.776505  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) starting domain...
	I0120 12:21:10.776521  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) ensuring networks are active...
	I0120 12:21:10.777217  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Ensuring network default is active
	I0120 12:21:10.777608  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Ensuring network mk-kubernetes-upgrade-049625 is active
	I0120 12:21:10.778100  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) getting domain XML...
	I0120 12:21:10.778832  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) creating domain...
	I0120 12:21:12.306294  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) waiting for IP...
	I0120 12:21:12.307403  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:12.308184  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | unable to find current IP address of domain kubernetes-upgrade-049625 in network mk-kubernetes-upgrade-049625
	I0120 12:21:12.308212  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | I0120 12:21:12.308083  983827 retry.go:31] will retry after 269.692719ms: waiting for domain to come up
	I0120 12:21:12.579744  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:12.580333  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | unable to find current IP address of domain kubernetes-upgrade-049625 in network mk-kubernetes-upgrade-049625
	I0120 12:21:12.580355  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | I0120 12:21:12.580263  983827 retry.go:31] will retry after 313.667087ms: waiting for domain to come up
	I0120 12:21:12.896012  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:12.896721  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | unable to find current IP address of domain kubernetes-upgrade-049625 in network mk-kubernetes-upgrade-049625
	I0120 12:21:12.896759  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | I0120 12:21:12.896618  983827 retry.go:31] will retry after 446.373398ms: waiting for domain to come up
	I0120 12:21:13.344318  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:13.345004  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | unable to find current IP address of domain kubernetes-upgrade-049625 in network mk-kubernetes-upgrade-049625
	I0120 12:21:13.345027  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | I0120 12:21:13.344961  983827 retry.go:31] will retry after 468.077991ms: waiting for domain to come up
	I0120 12:21:13.814507  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:13.815112  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | unable to find current IP address of domain kubernetes-upgrade-049625 in network mk-kubernetes-upgrade-049625
	I0120 12:21:13.815146  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | I0120 12:21:13.815053  983827 retry.go:31] will retry after 594.653296ms: waiting for domain to come up
	I0120 12:21:14.411973  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:14.412630  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | unable to find current IP address of domain kubernetes-upgrade-049625 in network mk-kubernetes-upgrade-049625
	I0120 12:21:14.412665  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | I0120 12:21:14.412583  983827 retry.go:31] will retry after 855.509839ms: waiting for domain to come up
	I0120 12:21:15.270030  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:15.270565  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | unable to find current IP address of domain kubernetes-upgrade-049625 in network mk-kubernetes-upgrade-049625
	I0120 12:21:15.270637  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | I0120 12:21:15.270552  983827 retry.go:31] will retry after 741.577197ms: waiting for domain to come up
	I0120 12:21:16.013580  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:16.014103  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | unable to find current IP address of domain kubernetes-upgrade-049625 in network mk-kubernetes-upgrade-049625
	I0120 12:21:16.014156  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | I0120 12:21:16.014115  983827 retry.go:31] will retry after 1.416825662s: waiting for domain to come up
	I0120 12:21:17.432457  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:17.432951  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | unable to find current IP address of domain kubernetes-upgrade-049625 in network mk-kubernetes-upgrade-049625
	I0120 12:21:17.432983  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | I0120 12:21:17.432911  983827 retry.go:31] will retry after 1.635947016s: waiting for domain to come up
	I0120 12:21:19.070654  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:19.071111  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | unable to find current IP address of domain kubernetes-upgrade-049625 in network mk-kubernetes-upgrade-049625
	I0120 12:21:19.071138  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | I0120 12:21:19.071072  983827 retry.go:31] will retry after 2.021231433s: waiting for domain to come up
	I0120 12:21:21.093730  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:21.094293  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | unable to find current IP address of domain kubernetes-upgrade-049625 in network mk-kubernetes-upgrade-049625
	I0120 12:21:21.094326  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | I0120 12:21:21.094250  983827 retry.go:31] will retry after 2.760139389s: waiting for domain to come up
	I0120 12:21:23.855937  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:23.856406  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | unable to find current IP address of domain kubernetes-upgrade-049625 in network mk-kubernetes-upgrade-049625
	I0120 12:21:23.856437  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | I0120 12:21:23.856384  983827 retry.go:31] will retry after 2.268366654s: waiting for domain to come up
	I0120 12:21:26.126474  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:26.126918  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | unable to find current IP address of domain kubernetes-upgrade-049625 in network mk-kubernetes-upgrade-049625
	I0120 12:21:26.126954  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | I0120 12:21:26.126882  983827 retry.go:31] will retry after 4.252618341s: waiting for domain to come up
	I0120 12:21:30.380935  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:30.381369  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | unable to find current IP address of domain kubernetes-upgrade-049625 in network mk-kubernetes-upgrade-049625
	I0120 12:21:30.381398  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | I0120 12:21:30.381334  983827 retry.go:31] will retry after 3.648460143s: waiting for domain to come up
	I0120 12:21:34.033578  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:34.034119  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) found domain IP: 192.168.72.97
	I0120 12:21:34.034136  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) reserving static IP address...
	I0120 12:21:34.034153  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has current primary IP address 192.168.72.97 and MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:34.034736  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-049625", mac: "52:54:00:6b:c9:93", ip: "192.168.72.97"} in network mk-kubernetes-upgrade-049625
	I0120 12:21:34.109046  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) reserved static IP address 192.168.72.97 for domain kubernetes-upgrade-049625
	I0120 12:21:34.109077  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | Getting to WaitForSSH function...
	I0120 12:21:34.109085  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) waiting for SSH...
	I0120 12:21:34.112137  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:34.112609  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:c9:93", ip: ""} in network mk-kubernetes-upgrade-049625: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:26 +0000 UTC Type:0 Mac:52:54:00:6b:c9:93 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6b:c9:93}
	I0120 12:21:34.112644  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined IP address 192.168.72.97 and MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:34.112847  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | Using SSH client type: external
	I0120 12:21:34.112880  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | Using SSH private key: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/kubernetes-upgrade-049625/id_rsa (-rw-------)
	I0120 12:21:34.112922  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.97 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20151-942401/.minikube/machines/kubernetes-upgrade-049625/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 12:21:34.112946  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | About to run SSH command:
	I0120 12:21:34.112982  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | exit 0
	I0120 12:21:34.242452  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | SSH cmd err, output: <nil>: 
	I0120 12:21:34.242781  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) KVM machine creation complete
	I0120 12:21:34.243111  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetConfigRaw
	I0120 12:21:34.243809  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .DriverName
	I0120 12:21:34.244030  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .DriverName
	I0120 12:21:34.244206  983149 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0120 12:21:34.244223  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetState
	I0120 12:21:34.245706  983149 main.go:141] libmachine: Detecting operating system of created instance...
	I0120 12:21:34.245725  983149 main.go:141] libmachine: Waiting for SSH to be available...
	I0120 12:21:34.245732  983149 main.go:141] libmachine: Getting to WaitForSSH function...
	I0120 12:21:34.245741  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHHostname
	I0120 12:21:34.248750  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:34.249206  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:c9:93", ip: ""} in network mk-kubernetes-upgrade-049625: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:26 +0000 UTC Type:0 Mac:52:54:00:6b:c9:93 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:kubernetes-upgrade-049625 Clientid:01:52:54:00:6b:c9:93}
	I0120 12:21:34.249250  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined IP address 192.168.72.97 and MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:34.249421  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHPort
	I0120 12:21:34.249600  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHKeyPath
	I0120 12:21:34.249784  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHKeyPath
	I0120 12:21:34.249956  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHUsername
	I0120 12:21:34.250154  983149 main.go:141] libmachine: Using SSH client type: native
	I0120 12:21:34.250402  983149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.97 22 <nil> <nil>}
	I0120 12:21:34.250417  983149 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0120 12:21:34.357382  983149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 12:21:34.357402  983149 main.go:141] libmachine: Detecting the provisioner...
	I0120 12:21:34.357412  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHHostname
	I0120 12:21:34.359951  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:34.360305  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:c9:93", ip: ""} in network mk-kubernetes-upgrade-049625: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:26 +0000 UTC Type:0 Mac:52:54:00:6b:c9:93 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:kubernetes-upgrade-049625 Clientid:01:52:54:00:6b:c9:93}
	I0120 12:21:34.360336  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined IP address 192.168.72.97 and MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:34.360472  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHPort
	I0120 12:21:34.360667  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHKeyPath
	I0120 12:21:34.360837  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHKeyPath
	I0120 12:21:34.360979  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHUsername
	I0120 12:21:34.361164  983149 main.go:141] libmachine: Using SSH client type: native
	I0120 12:21:34.361323  983149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.97 22 <nil> <nil>}
	I0120 12:21:34.361334  983149 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0120 12:21:34.471011  983149 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0120 12:21:34.471134  983149 main.go:141] libmachine: found compatible host: buildroot
	I0120 12:21:34.471147  983149 main.go:141] libmachine: Provisioning with buildroot...
	I0120 12:21:34.471159  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetMachineName
	I0120 12:21:34.471436  983149 buildroot.go:166] provisioning hostname "kubernetes-upgrade-049625"
	I0120 12:21:34.471466  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetMachineName
	I0120 12:21:34.471655  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHHostname
	I0120 12:21:34.474991  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:34.475537  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:c9:93", ip: ""} in network mk-kubernetes-upgrade-049625: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:26 +0000 UTC Type:0 Mac:52:54:00:6b:c9:93 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:kubernetes-upgrade-049625 Clientid:01:52:54:00:6b:c9:93}
	I0120 12:21:34.475573  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined IP address 192.168.72.97 and MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:34.475747  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHPort
	I0120 12:21:34.475967  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHKeyPath
	I0120 12:21:34.476197  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHKeyPath
	I0120 12:21:34.476377  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHUsername
	I0120 12:21:34.476584  983149 main.go:141] libmachine: Using SSH client type: native
	I0120 12:21:34.476809  983149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.97 22 <nil> <nil>}
	I0120 12:21:34.476829  983149 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-049625 && echo "kubernetes-upgrade-049625" | sudo tee /etc/hostname
	I0120 12:21:34.604574  983149 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-049625
	
	I0120 12:21:34.604618  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHHostname
	I0120 12:21:34.607538  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:34.607936  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:c9:93", ip: ""} in network mk-kubernetes-upgrade-049625: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:26 +0000 UTC Type:0 Mac:52:54:00:6b:c9:93 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:kubernetes-upgrade-049625 Clientid:01:52:54:00:6b:c9:93}
	I0120 12:21:34.607973  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined IP address 192.168.72.97 and MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:34.608178  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHPort
	I0120 12:21:34.608385  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHKeyPath
	I0120 12:21:34.608543  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHKeyPath
	I0120 12:21:34.608683  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHUsername
	I0120 12:21:34.608848  983149 main.go:141] libmachine: Using SSH client type: native
	I0120 12:21:34.609034  983149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.97 22 <nil> <nil>}
	I0120 12:21:34.609057  983149 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-049625' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-049625/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-049625' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 12:21:34.731012  983149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 12:21:34.731046  983149 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20151-942401/.minikube CaCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20151-942401/.minikube}
	I0120 12:21:34.731085  983149 buildroot.go:174] setting up certificates
	I0120 12:21:34.731112  983149 provision.go:84] configureAuth start
	I0120 12:21:34.731130  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetMachineName
	I0120 12:21:34.731428  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetIP
	I0120 12:21:34.734091  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:34.734565  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:c9:93", ip: ""} in network mk-kubernetes-upgrade-049625: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:26 +0000 UTC Type:0 Mac:52:54:00:6b:c9:93 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:kubernetes-upgrade-049625 Clientid:01:52:54:00:6b:c9:93}
	I0120 12:21:34.734594  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined IP address 192.168.72.97 and MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:34.734759  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHHostname
	I0120 12:21:34.737185  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:34.737518  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:c9:93", ip: ""} in network mk-kubernetes-upgrade-049625: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:26 +0000 UTC Type:0 Mac:52:54:00:6b:c9:93 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:kubernetes-upgrade-049625 Clientid:01:52:54:00:6b:c9:93}
	I0120 12:21:34.737552  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined IP address 192.168.72.97 and MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:34.737677  983149 provision.go:143] copyHostCerts
	I0120 12:21:34.737740  983149 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem, removing ...
	I0120 12:21:34.737766  983149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem
	I0120 12:21:34.737832  983149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem (1123 bytes)
	I0120 12:21:34.737951  983149 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem, removing ...
	I0120 12:21:34.737980  983149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem
	I0120 12:21:34.738023  983149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem (1675 bytes)
	I0120 12:21:34.738108  983149 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem, removing ...
	I0120 12:21:34.738120  983149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem
	I0120 12:21:34.738148  983149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem (1078 bytes)
	I0120 12:21:34.738215  983149 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-049625 san=[127.0.0.1 192.168.72.97 kubernetes-upgrade-049625 localhost minikube]
	I0120 12:21:35.017266  983149 provision.go:177] copyRemoteCerts
	I0120 12:21:35.017327  983149 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 12:21:35.017354  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHHostname
	I0120 12:21:35.020099  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:35.020545  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:c9:93", ip: ""} in network mk-kubernetes-upgrade-049625: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:26 +0000 UTC Type:0 Mac:52:54:00:6b:c9:93 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:kubernetes-upgrade-049625 Clientid:01:52:54:00:6b:c9:93}
	I0120 12:21:35.020571  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined IP address 192.168.72.97 and MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:35.020745  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHPort
	I0120 12:21:35.020952  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHKeyPath
	I0120 12:21:35.021101  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHUsername
	I0120 12:21:35.021207  983149 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/kubernetes-upgrade-049625/id_rsa Username:docker}
	I0120 12:21:35.108473  983149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0120 12:21:35.133993  983149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0120 12:21:35.155847  983149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0120 12:21:35.177725  983149 provision.go:87] duration metric: took 446.593553ms to configureAuth
	I0120 12:21:35.177746  983149 buildroot.go:189] setting minikube options for container-runtime
	I0120 12:21:35.177920  983149 config.go:182] Loaded profile config "kubernetes-upgrade-049625": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0120 12:21:35.178001  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHHostname
	I0120 12:21:35.180664  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:35.181043  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:c9:93", ip: ""} in network mk-kubernetes-upgrade-049625: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:26 +0000 UTC Type:0 Mac:52:54:00:6b:c9:93 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:kubernetes-upgrade-049625 Clientid:01:52:54:00:6b:c9:93}
	I0120 12:21:35.181075  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined IP address 192.168.72.97 and MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:35.181275  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHPort
	I0120 12:21:35.181481  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHKeyPath
	I0120 12:21:35.181670  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHKeyPath
	I0120 12:21:35.181842  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHUsername
	I0120 12:21:35.182043  983149 main.go:141] libmachine: Using SSH client type: native
	I0120 12:21:35.182224  983149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.97 22 <nil> <nil>}
	I0120 12:21:35.182244  983149 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 12:21:35.400068  983149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 12:21:35.400109  983149 main.go:141] libmachine: Checking connection to Docker...
	I0120 12:21:35.400119  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetURL
	I0120 12:21:35.401561  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | using libvirt version 6000000
	I0120 12:21:35.404178  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:35.404572  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:c9:93", ip: ""} in network mk-kubernetes-upgrade-049625: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:26 +0000 UTC Type:0 Mac:52:54:00:6b:c9:93 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:kubernetes-upgrade-049625 Clientid:01:52:54:00:6b:c9:93}
	I0120 12:21:35.404607  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined IP address 192.168.72.97 and MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:35.404772  983149 main.go:141] libmachine: Docker is up and running!
	I0120 12:21:35.404789  983149 main.go:141] libmachine: Reticulating splines...
	I0120 12:21:35.404796  983149 client.go:171] duration metric: took 25.20663493s to LocalClient.Create
	I0120 12:21:35.404826  983149 start.go:167] duration metric: took 25.20669918s to libmachine.API.Create "kubernetes-upgrade-049625"
	I0120 12:21:35.404842  983149 start.go:293] postStartSetup for "kubernetes-upgrade-049625" (driver="kvm2")
	I0120 12:21:35.404857  983149 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 12:21:35.404884  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .DriverName
	I0120 12:21:35.405173  983149 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 12:21:35.405218  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHHostname
	I0120 12:21:35.407341  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:35.407690  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:c9:93", ip: ""} in network mk-kubernetes-upgrade-049625: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:26 +0000 UTC Type:0 Mac:52:54:00:6b:c9:93 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:kubernetes-upgrade-049625 Clientid:01:52:54:00:6b:c9:93}
	I0120 12:21:35.407730  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined IP address 192.168.72.97 and MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:35.407927  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHPort
	I0120 12:21:35.408156  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHKeyPath
	I0120 12:21:35.408313  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHUsername
	I0120 12:21:35.408453  983149 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/kubernetes-upgrade-049625/id_rsa Username:docker}
	I0120 12:21:35.492117  983149 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 12:21:35.497013  983149 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 12:21:35.497038  983149 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-942401/.minikube/addons for local assets ...
	I0120 12:21:35.497117  983149 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-942401/.minikube/files for local assets ...
	I0120 12:21:35.497215  983149 filesync.go:149] local asset: /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem -> 9496562.pem in /etc/ssl/certs
	I0120 12:21:35.497327  983149 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 12:21:35.508600  983149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem --> /etc/ssl/certs/9496562.pem (1708 bytes)
	I0120 12:21:35.531065  983149 start.go:296] duration metric: took 126.20815ms for postStartSetup
	I0120 12:21:35.531126  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetConfigRaw
	I0120 12:21:35.531732  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetIP
	I0120 12:21:35.534715  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:35.535086  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:c9:93", ip: ""} in network mk-kubernetes-upgrade-049625: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:26 +0000 UTC Type:0 Mac:52:54:00:6b:c9:93 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:kubernetes-upgrade-049625 Clientid:01:52:54:00:6b:c9:93}
	I0120 12:21:35.535124  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined IP address 192.168.72.97 and MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:35.535342  983149 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625/config.json ...
	I0120 12:21:35.535553  983149 start.go:128] duration metric: took 25.36280269s to createHost
	I0120 12:21:35.535581  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHHostname
	I0120 12:21:35.537820  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:35.538180  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:c9:93", ip: ""} in network mk-kubernetes-upgrade-049625: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:26 +0000 UTC Type:0 Mac:52:54:00:6b:c9:93 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:kubernetes-upgrade-049625 Clientid:01:52:54:00:6b:c9:93}
	I0120 12:21:35.538219  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined IP address 192.168.72.97 and MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:35.538299  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHPort
	I0120 12:21:35.538497  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHKeyPath
	I0120 12:21:35.538718  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHKeyPath
	I0120 12:21:35.538888  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHUsername
	I0120 12:21:35.539060  983149 main.go:141] libmachine: Using SSH client type: native
	I0120 12:21:35.539248  983149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.97 22 <nil> <nil>}
	I0120 12:21:35.539259  983149 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 12:21:35.646505  983149 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737375695.605487071
	
	I0120 12:21:35.646548  983149 fix.go:216] guest clock: 1737375695.605487071
	I0120 12:21:35.646559  983149 fix.go:229] Guest: 2025-01-20 12:21:35.605487071 +0000 UTC Remote: 2025-01-20 12:21:35.535565519 +0000 UTC m=+78.280215994 (delta=69.921552ms)
	I0120 12:21:35.646578  983149 fix.go:200] guest clock delta is within tolerance: 69.921552ms
	I0120 12:21:35.646583  983149 start.go:83] releasing machines lock for "kubernetes-upgrade-049625", held for 25.474014558s
	I0120 12:21:35.646606  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .DriverName
	I0120 12:21:35.646933  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetIP
	I0120 12:21:35.649588  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:35.649950  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:c9:93", ip: ""} in network mk-kubernetes-upgrade-049625: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:26 +0000 UTC Type:0 Mac:52:54:00:6b:c9:93 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:kubernetes-upgrade-049625 Clientid:01:52:54:00:6b:c9:93}
	I0120 12:21:35.649989  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined IP address 192.168.72.97 and MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:35.650128  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .DriverName
	I0120 12:21:35.650611  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .DriverName
	I0120 12:21:35.650819  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .DriverName
	I0120 12:21:35.650910  983149 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 12:21:35.650958  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHHostname
	I0120 12:21:35.651101  983149 ssh_runner.go:195] Run: cat /version.json
	I0120 12:21:35.651133  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHHostname
	I0120 12:21:35.653784  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:35.653998  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:35.654127  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:c9:93", ip: ""} in network mk-kubernetes-upgrade-049625: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:26 +0000 UTC Type:0 Mac:52:54:00:6b:c9:93 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:kubernetes-upgrade-049625 Clientid:01:52:54:00:6b:c9:93}
	I0120 12:21:35.654160  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined IP address 192.168.72.97 and MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:35.654287  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHPort
	I0120 12:21:35.654444  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:c9:93", ip: ""} in network mk-kubernetes-upgrade-049625: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:26 +0000 UTC Type:0 Mac:52:54:00:6b:c9:93 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:kubernetes-upgrade-049625 Clientid:01:52:54:00:6b:c9:93}
	I0120 12:21:35.654451  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHKeyPath
	I0120 12:21:35.654472  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined IP address 192.168.72.97 and MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:35.654640  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHUsername
	I0120 12:21:35.654702  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHPort
	I0120 12:21:35.654782  983149 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/kubernetes-upgrade-049625/id_rsa Username:docker}
	I0120 12:21:35.654872  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHKeyPath
	I0120 12:21:35.655035  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHUsername
	I0120 12:21:35.655202  983149 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/kubernetes-upgrade-049625/id_rsa Username:docker}
	I0120 12:21:35.773509  983149 ssh_runner.go:195] Run: systemctl --version
	I0120 12:21:35.778840  983149 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 12:21:35.941636  983149 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 12:21:35.948282  983149 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 12:21:35.948364  983149 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 12:21:35.963608  983149 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 12:21:35.963634  983149 start.go:495] detecting cgroup driver to use...
	I0120 12:21:35.963702  983149 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 12:21:35.980482  983149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 12:21:35.996470  983149 docker.go:217] disabling cri-docker service (if available) ...
	I0120 12:21:35.996508  983149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 12:21:36.011197  983149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 12:21:36.025095  983149 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 12:21:36.159905  983149 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 12:21:36.341547  983149 docker.go:233] disabling docker service ...
	I0120 12:21:36.341611  983149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 12:21:36.356896  983149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 12:21:36.369084  983149 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 12:21:36.500276  983149 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 12:21:36.640939  983149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 12:21:36.665807  983149 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 12:21:36.684010  983149 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0120 12:21:36.684083  983149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:36.694490  983149 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 12:21:36.694583  983149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:36.704865  983149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:36.714979  983149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:36.725289  983149 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 12:21:36.735323  983149 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 12:21:36.744283  983149 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 12:21:36.744332  983149 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 12:21:36.756278  983149 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 12:21:36.765144  983149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:21:36.880009  983149 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 12:21:36.968681  983149 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 12:21:36.968771  983149 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 12:21:36.973156  983149 start.go:563] Will wait 60s for crictl version
	I0120 12:21:36.973220  983149 ssh_runner.go:195] Run: which crictl
	I0120 12:21:36.976641  983149 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 12:21:37.010526  983149 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 12:21:37.010596  983149 ssh_runner.go:195] Run: crio --version
	I0120 12:21:37.037667  983149 ssh_runner.go:195] Run: crio --version
	I0120 12:21:37.066475  983149 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0120 12:21:37.067793  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetIP
	I0120 12:21:37.070700  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:37.071119  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:c9:93", ip: ""} in network mk-kubernetes-upgrade-049625: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:26 +0000 UTC Type:0 Mac:52:54:00:6b:c9:93 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:kubernetes-upgrade-049625 Clientid:01:52:54:00:6b:c9:93}
	I0120 12:21:37.071151  983149 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined IP address 192.168.72.97 and MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:21:37.071394  983149 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0120 12:21:37.075178  983149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 12:21:37.087040  983149 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-049625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-049625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.97 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 12:21:37.087144  983149 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0120 12:21:37.087185  983149 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:21:37.118425  983149 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0120 12:21:37.118502  983149 ssh_runner.go:195] Run: which lz4
	I0120 12:21:37.122011  983149 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 12:21:37.126256  983149 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 12:21:37.126280  983149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0120 12:21:38.817006  983149 crio.go:462] duration metric: took 1.69501863s to copy over tarball
	I0120 12:21:38.817104  983149 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 12:21:41.363426  983149 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.546288879s)
	I0120 12:21:41.363463  983149 crio.go:469] duration metric: took 2.546425092s to extract the tarball
	I0120 12:21:41.363475  983149 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0120 12:21:41.408718  983149 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:21:41.457123  983149 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0120 12:21:41.457159  983149 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0120 12:21:41.457269  983149 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0120 12:21:41.457291  983149 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0120 12:21:41.457316  983149 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:21:41.457267  983149 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:21:41.457257  983149 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:21:41.457266  983149 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0120 12:21:41.457285  983149 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:21:41.457287  983149 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:21:41.458801  983149 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:21:41.458861  983149 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:21:41.458908  983149 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0120 12:21:41.458920  983149 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0120 12:21:41.459146  983149 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:21:41.459229  983149 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:21:41.459494  983149 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0120 12:21:41.459495  983149 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:21:41.659718  983149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0120 12:21:41.676872  983149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:21:41.678018  983149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0120 12:21:41.696545  983149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:21:41.696933  983149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:21:41.702950  983149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0120 12:21:41.722641  983149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:21:41.761301  983149 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0120 12:21:41.761414  983149 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0120 12:21:41.761531  983149 ssh_runner.go:195] Run: which crictl
	I0120 12:21:41.806025  983149 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0120 12:21:41.806091  983149 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0120 12:21:41.806028  983149 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0120 12:21:41.806184  983149 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:21:41.806221  983149 ssh_runner.go:195] Run: which crictl
	I0120 12:21:41.806231  983149 ssh_runner.go:195] Run: which crictl
	I0120 12:21:41.870490  983149 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0120 12:21:41.870499  983149 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0120 12:21:41.870558  983149 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:21:41.870563  983149 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:21:41.870591  983149 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0120 12:21:41.870604  983149 ssh_runner.go:195] Run: which crictl
	I0120 12:21:41.870605  983149 ssh_runner.go:195] Run: which crictl
	I0120 12:21:41.870620  983149 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0120 12:21:41.870654  983149 ssh_runner.go:195] Run: which crictl
	I0120 12:21:41.876835  983149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 12:21:41.876916  983149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 12:21:41.876920  983149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:21:41.876942  983149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 12:21:41.877091  983149 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0120 12:21:41.877117  983149 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:21:41.877157  983149 ssh_runner.go:195] Run: which crictl
	I0120 12:21:41.882042  983149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:21:41.882110  983149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:21:41.994734  983149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 12:21:41.994808  983149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:21:42.002902  983149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 12:21:42.002916  983149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 12:21:42.002937  983149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:21:42.035298  983149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:21:42.035562  983149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:21:42.131866  983149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 12:21:42.157885  983149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:21:42.178069  983149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 12:21:42.178163  983149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:21:42.178187  983149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 12:21:42.203829  983149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:21:42.203885  983149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:21:42.222316  983149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0120 12:21:42.326104  983149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0120 12:21:42.351933  983149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0120 12:21:42.351975  983149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0120 12:21:42.352094  983149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:21:42.352175  983149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0120 12:21:42.352244  983149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0120 12:21:42.396950  983149 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0120 12:21:42.683900  983149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:21:42.820656  983149 cache_images.go:92] duration metric: took 1.363474371s to LoadCachedImages
	W0120 12:21:42.820770  983149 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0120 12:21:42.820789  983149 kubeadm.go:934] updating node { 192.168.72.97 8443 v1.20.0 crio true true} ...
	I0120 12:21:42.820927  983149 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-049625 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-049625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 12:21:42.821012  983149 ssh_runner.go:195] Run: crio config
	I0120 12:21:42.874940  983149 cni.go:84] Creating CNI manager for ""
	I0120 12:21:42.874966  983149 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:21:42.874979  983149 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 12:21:42.875005  983149 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.97 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-049625 NodeName:kubernetes-upgrade-049625 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0120 12:21:42.875182  983149 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.97
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-049625"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 12:21:42.875265  983149 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0120 12:21:42.885150  983149 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 12:21:42.885234  983149 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 12:21:42.894174  983149 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0120 12:21:42.909882  983149 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 12:21:42.925406  983149 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0120 12:21:42.943064  983149 ssh_runner.go:195] Run: grep 192.168.72.97	control-plane.minikube.internal$ /etc/hosts
	I0120 12:21:42.949942  983149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 12:21:42.966614  983149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:21:43.086017  983149 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:21:43.103652  983149 certs.go:68] Setting up /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625 for IP: 192.168.72.97
	I0120 12:21:43.103683  983149 certs.go:194] generating shared ca certs ...
	I0120 12:21:43.103706  983149 certs.go:226] acquiring lock for ca certs: {Name:mk7d6f61e0c358ebc451db1b73619131ab80bd63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:21:43.103922  983149 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20151-942401/.minikube/ca.key
	I0120 12:21:43.103990  983149 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.key
	I0120 12:21:43.104007  983149 certs.go:256] generating profile certs ...
	I0120 12:21:43.104090  983149 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625/client.key
	I0120 12:21:43.104130  983149 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625/client.crt with IP's: []
	I0120 12:21:43.347189  983149 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625/client.crt ...
	I0120 12:21:43.347229  983149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625/client.crt: {Name:mkd838c38d85e08a7c68aac0ea4e044907231558 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:21:43.347448  983149 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625/client.key ...
	I0120 12:21:43.347473  983149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625/client.key: {Name:mk5fe0086e7398777eeee75b47958eb88dedfd5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:21:43.347592  983149 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625/apiserver.key.0c1e7817
	I0120 12:21:43.347616  983149 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625/apiserver.crt.0c1e7817 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.97]
	I0120 12:21:43.629626  983149 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625/apiserver.crt.0c1e7817 ...
	I0120 12:21:43.629652  983149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625/apiserver.crt.0c1e7817: {Name:mk041ff5339bf59968d258db66840acc871b23d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:21:43.629786  983149 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625/apiserver.key.0c1e7817 ...
	I0120 12:21:43.629798  983149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625/apiserver.key.0c1e7817: {Name:mk63b6f66ad39fe3172e14ea2c09fa1993c7b17b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:21:43.629869  983149 certs.go:381] copying /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625/apiserver.crt.0c1e7817 -> /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625/apiserver.crt
	I0120 12:21:43.629937  983149 certs.go:385] copying /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625/apiserver.key.0c1e7817 -> /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625/apiserver.key
	I0120 12:21:43.629986  983149 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625/proxy-client.key
	I0120 12:21:43.630000  983149 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625/proxy-client.crt with IP's: []
	I0120 12:21:43.802844  983149 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625/proxy-client.crt ...
	I0120 12:21:43.802876  983149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625/proxy-client.crt: {Name:mk6590cd8d22518c7f1ba36c9f3850fbf1fb443d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:21:43.803033  983149 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625/proxy-client.key ...
	I0120 12:21:43.803045  983149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625/proxy-client.key: {Name:mk6ebdba9d96186b50b289f52f865090bff9f138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:21:43.803221  983149 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656.pem (1338 bytes)
	W0120 12:21:43.803258  983149 certs.go:480] ignoring /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656_empty.pem, impossibly tiny 0 bytes
	I0120 12:21:43.803269  983149 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem (1679 bytes)
	I0120 12:21:43.803295  983149 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem (1078 bytes)
	I0120 12:21:43.803320  983149 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem (1123 bytes)
	I0120 12:21:43.803343  983149 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem (1675 bytes)
	I0120 12:21:43.803379  983149 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem (1708 bytes)
	I0120 12:21:43.804047  983149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 12:21:43.832124  983149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0120 12:21:43.857921  983149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 12:21:43.881921  983149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 12:21:43.904394  983149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0120 12:21:43.944524  983149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0120 12:21:44.030368  983149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 12:21:44.053759  983149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0120 12:21:44.085232  983149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 12:21:44.113273  983149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656.pem --> /usr/share/ca-certificates/949656.pem (1338 bytes)
	I0120 12:21:44.139488  983149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem --> /usr/share/ca-certificates/9496562.pem (1708 bytes)
	I0120 12:21:44.163217  983149 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 12:21:44.178837  983149 ssh_runner.go:195] Run: openssl version
	I0120 12:21:44.184264  983149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9496562.pem && ln -fs /usr/share/ca-certificates/9496562.pem /etc/ssl/certs/9496562.pem"
	I0120 12:21:44.194510  983149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9496562.pem
	I0120 12:21:44.198436  983149 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 11:30 /usr/share/ca-certificates/9496562.pem
	I0120 12:21:44.198484  983149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9496562.pem
	I0120 12:21:44.203953  983149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9496562.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 12:21:44.213716  983149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 12:21:44.223778  983149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:21:44.227826  983149 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 11:22 /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:21:44.227885  983149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:21:44.233383  983149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 12:21:44.244734  983149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/949656.pem && ln -fs /usr/share/ca-certificates/949656.pem /etc/ssl/certs/949656.pem"
	I0120 12:21:44.254375  983149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/949656.pem
	I0120 12:21:44.258426  983149 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 11:30 /usr/share/ca-certificates/949656.pem
	I0120 12:21:44.258468  983149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/949656.pem
	I0120 12:21:44.264080  983149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/949656.pem /etc/ssl/certs/51391683.0"
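The three openssl/ln pairs above follow the standard OpenSSL hashed-directory layout: each CA file under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash with a ".0" suffix, which is how TLS clients look the CA up. A minimal sketch of the same convention for a single certificate (the path is illustrative, not one of the files above):

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)   # prints the subject hash, e.g. 51391683
	sudo ln -fs /usr/share/ca-certificates/example.pem "/etc/ssl/certs/${HASH}.0"  # hash-named symlink used for CA lookup

The ".0" suffix would only grow to ".1", ".2", ... if two certificates happened to share the same subject hash.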
	I0120 12:21:44.276347  983149 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 12:21:44.281359  983149 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0120 12:21:44.281427  983149 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-049625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-049625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.97 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:21:44.281528  983149 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 12:21:44.281572  983149 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 12:21:44.316430  983149 cri.go:89] found id: ""
	I0120 12:21:44.316497  983149 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 12:21:44.326255  983149 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 12:21:44.335674  983149 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:21:44.346267  983149 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:21:44.346286  983149 kubeadm.go:157] found existing configuration files:
	
	I0120 12:21:44.346328  983149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 12:21:44.356451  983149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:21:44.356537  983149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:21:44.366757  983149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 12:21:44.376506  983149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:21:44.376550  983149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:21:44.386339  983149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 12:21:44.396010  983149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:21:44.396069  983149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:21:44.404461  983149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 12:21:44.413042  983149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:21:44.413098  983149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
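Each grep/rm pair above is the same stale-config check: if an existing kubeconfig under /etc/kubernetes does not reference the expected control-plane endpoint, it gets deleted before kubeadm init runs; here all four files are absent, so every grep exits with status 2 and the rm -f calls are no-ops. A condensed sketch of that pattern, using the endpoint from the log above:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done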
	I0120 12:21:44.423510  983149 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 12:21:44.537285  983149 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0120 12:21:44.537391  983149 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 12:21:44.682348  983149 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 12:21:44.682485  983149 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 12:21:44.682617  983149 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0120 12:21:44.875420  983149 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 12:21:45.051265  983149 out.go:235]   - Generating certificates and keys ...
	I0120 12:21:45.051400  983149 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 12:21:45.051496  983149 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 12:21:45.166316  983149 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0120 12:21:45.256857  983149 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0120 12:21:45.363680  983149 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0120 12:21:45.527848  983149 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0120 12:21:45.746499  983149 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0120 12:21:45.746790  983149 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-049625 localhost] and IPs [192.168.72.97 127.0.0.1 ::1]
	I0120 12:21:45.985040  983149 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0120 12:21:45.985273  983149 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-049625 localhost] and IPs [192.168.72.97 127.0.0.1 ::1]
	I0120 12:21:46.980764  983149 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0120 12:21:47.186121  983149 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0120 12:21:47.509341  983149 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0120 12:21:47.509784  983149 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 12:21:48.089692  983149 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 12:21:48.226724  983149 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 12:21:48.509977  983149 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 12:21:48.995161  983149 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 12:21:49.014215  983149 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 12:21:49.015392  983149 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 12:21:49.015464  983149 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 12:21:49.143399  983149 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 12:21:49.145828  983149 out.go:235]   - Booting up control plane ...
	I0120 12:21:49.145964  983149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 12:21:49.153153  983149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 12:21:49.153237  983149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 12:21:49.153897  983149 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 12:21:49.160318  983149 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0120 12:22:29.127761  983149 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0120 12:22:29.128115  983149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:22:29.128428  983149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:22:34.127934  983149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:22:34.128134  983149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:22:44.127311  983149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:22:44.127612  983149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:23:04.127758  983149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:23:04.128039  983149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:23:44.126716  983149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:23:44.127290  983149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:23:44.127323  983149 kubeadm.go:310] 
	I0120 12:23:44.127419  983149 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0120 12:23:44.127513  983149 kubeadm.go:310] 		timed out waiting for the condition
	I0120 12:23:44.127531  983149 kubeadm.go:310] 
	I0120 12:23:44.127614  983149 kubeadm.go:310] 	This error is likely caused by:
	I0120 12:23:44.127701  983149 kubeadm.go:310] 		- The kubelet is not running
	I0120 12:23:44.127948  983149 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0120 12:23:44.127970  983149 kubeadm.go:310] 
	I0120 12:23:44.128227  983149 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0120 12:23:44.128308  983149 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0120 12:23:44.128385  983149 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0120 12:23:44.128399  983149 kubeadm.go:310] 
	I0120 12:23:44.128654  983149 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0120 12:23:44.128859  983149 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0120 12:23:44.128872  983149 kubeadm.go:310] 
	I0120 12:23:44.129172  983149 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0120 12:23:44.129434  983149 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0120 12:23:44.129634  983149 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0120 12:23:44.129817  983149 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0120 12:23:44.129914  983149 kubeadm.go:310] 
	I0120 12:23:44.130183  983149 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 12:23:44.130400  983149 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0120 12:23:44.130845  983149 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0120 12:23:44.130994  983149 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-049625 localhost] and IPs [192.168.72.97 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-049625 localhost] and IPs [192.168.72.97 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0120 12:23:44.131046  983149 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 12:23:45.681039  983149 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.549962259s)
	I0120 12:23:45.681133  983149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 12:23:45.715744  983149 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:23:45.735081  983149 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:23:45.735104  983149 kubeadm.go:157] found existing configuration files:
	
	I0120 12:23:45.735166  983149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 12:23:45.749298  983149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:23:45.749372  983149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:23:45.764403  983149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 12:23:45.776446  983149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:23:45.776517  983149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:23:45.789365  983149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 12:23:45.801301  983149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:23:45.801372  983149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:23:45.811602  983149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 12:23:45.821046  983149 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:23:45.821099  983149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 12:23:45.829944  983149 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 12:23:46.072241  983149 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 12:25:42.002221  983149 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0120 12:25:42.002319  983149 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0120 12:25:42.004252  983149 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0120 12:25:42.004300  983149 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 12:25:42.004391  983149 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 12:25:42.004492  983149 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 12:25:42.004599  983149 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0120 12:25:42.004682  983149 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 12:25:42.006309  983149 out.go:235]   - Generating certificates and keys ...
	I0120 12:25:42.006395  983149 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 12:25:42.006472  983149 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 12:25:42.006590  983149 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 12:25:42.006682  983149 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 12:25:42.006770  983149 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 12:25:42.006815  983149 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 12:25:42.006864  983149 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 12:25:42.006912  983149 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 12:25:42.006979  983149 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 12:25:42.007070  983149 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 12:25:42.007126  983149 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 12:25:42.007205  983149 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 12:25:42.007270  983149 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 12:25:42.007328  983149 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 12:25:42.007389  983149 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 12:25:42.007434  983149 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 12:25:42.007530  983149 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 12:25:42.007607  983149 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 12:25:42.007641  983149 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 12:25:42.007701  983149 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 12:25:42.009157  983149 out.go:235]   - Booting up control plane ...
	I0120 12:25:42.009261  983149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 12:25:42.009374  983149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 12:25:42.009464  983149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 12:25:42.009553  983149 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 12:25:42.009785  983149 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0120 12:25:42.009840  983149 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0120 12:25:42.009902  983149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:25:42.010076  983149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:25:42.010146  983149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:25:42.010304  983149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:25:42.010380  983149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:25:42.010667  983149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:25:42.010785  983149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:25:42.011066  983149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:25:42.011160  983149 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:25:42.011396  983149 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:25:42.011410  983149 kubeadm.go:310] 
	I0120 12:25:42.011446  983149 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0120 12:25:42.011479  983149 kubeadm.go:310] 		timed out waiting for the condition
	I0120 12:25:42.011489  983149 kubeadm.go:310] 
	I0120 12:25:42.011522  983149 kubeadm.go:310] 	This error is likely caused by:
	I0120 12:25:42.011551  983149 kubeadm.go:310] 		- The kubelet is not running
	I0120 12:25:42.011642  983149 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0120 12:25:42.011652  983149 kubeadm.go:310] 
	I0120 12:25:42.011778  983149 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0120 12:25:42.011814  983149 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0120 12:25:42.011846  983149 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0120 12:25:42.011854  983149 kubeadm.go:310] 
	I0120 12:25:42.011949  983149 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0120 12:25:42.012051  983149 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0120 12:25:42.012064  983149 kubeadm.go:310] 
	I0120 12:25:42.012180  983149 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0120 12:25:42.012306  983149 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0120 12:25:42.012424  983149 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0120 12:25:42.012508  983149 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0120 12:25:42.012551  983149 kubeadm.go:310] 
	I0120 12:25:42.012583  983149 kubeadm.go:394] duration metric: took 3m57.731160219s to StartCluster
	I0120 12:25:42.012630  983149 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:25:42.012690  983149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:25:42.058130  983149 cri.go:89] found id: ""
	I0120 12:25:42.058160  983149 logs.go:282] 0 containers: []
	W0120 12:25:42.058172  983149 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:25:42.058181  983149 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:25:42.058241  983149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:25:42.091384  983149 cri.go:89] found id: ""
	I0120 12:25:42.091412  983149 logs.go:282] 0 containers: []
	W0120 12:25:42.091422  983149 logs.go:284] No container was found matching "etcd"
	I0120 12:25:42.091430  983149 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:25:42.091494  983149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:25:42.140214  983149 cri.go:89] found id: ""
	I0120 12:25:42.140250  983149 logs.go:282] 0 containers: []
	W0120 12:25:42.140260  983149 logs.go:284] No container was found matching "coredns"
	I0120 12:25:42.140268  983149 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:25:42.140337  983149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:25:42.172648  983149 cri.go:89] found id: ""
	I0120 12:25:42.172677  983149 logs.go:282] 0 containers: []
	W0120 12:25:42.172685  983149 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:25:42.172692  983149 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:25:42.172756  983149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:25:42.204381  983149 cri.go:89] found id: ""
	I0120 12:25:42.204413  983149 logs.go:282] 0 containers: []
	W0120 12:25:42.204432  983149 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:25:42.204442  983149 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:25:42.204513  983149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:25:42.242361  983149 cri.go:89] found id: ""
	I0120 12:25:42.242389  983149 logs.go:282] 0 containers: []
	W0120 12:25:42.242400  983149 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:25:42.242409  983149 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:25:42.242460  983149 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:25:42.272482  983149 cri.go:89] found id: ""
	I0120 12:25:42.272505  983149 logs.go:282] 0 containers: []
	W0120 12:25:42.272512  983149 logs.go:284] No container was found matching "kindnet"
	I0120 12:25:42.272523  983149 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:25:42.272535  983149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:25:42.371741  983149 logs.go:123] Gathering logs for container status ...
	I0120 12:25:42.371772  983149 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:25:42.407053  983149 logs.go:123] Gathering logs for kubelet ...
	I0120 12:25:42.407091  983149 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:25:42.455777  983149 logs.go:123] Gathering logs for dmesg ...
	I0120 12:25:42.455802  983149 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:25:42.467877  983149 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:25:42.467901  983149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:25:42.573575  983149 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
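The "connection to the server localhost:8443 was refused" failure is consistent with the empty crictl listings earlier in the log: no kube-apiserver container was ever created, so kubectl has nothing to reach. Assuming the crio socket path shown above, a quick manual confirmation on the node would be:

	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a --name=kube-apiserver   # expect no rows here
	sudo journalctl -u kubelet --no-pager | tail -n 50                                   # why the kubelet never came up

Both checks mirror the log-gathering commands already run above.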
	W0120 12:25:42.573604  983149 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0120 12:25:42.573667  983149 out.go:270] * 
	W0120 12:25:42.573737  983149 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0120 12:25:42.573760  983149 out.go:270] * 
	* 
	W0120 12:25:42.574621  983149 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0120 12:25:42.577750  983149 out.go:201] 
	W0120 12:25:42.578885  983149 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0120 12:25:42.578918  983149 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0120 12:25:42.578935  983149 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0120 12:25:42.580420  983149 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-049625 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
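The exit status 109 above is the failed start whose log ends with "Exiting due to K8S_KUBELET_NOT_RUNNING": kubeadm init gave up because the kubelet health endpoint at 127.0.0.1:10248 never answered. A minimal triage sketch, limited to the commands the kubeadm output and the minikube suggestion above already name; the profile name, crio socket path, and start flags are taken from this run, and ssh reachability of the failed VM is an assumption:

	# inspect the kubelet on the node (commands quoted from the kubeadm error text)
	out/minikube-linux-amd64 -p kubernetes-upgrade-049625 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p kubernetes-upgrade-049625 ssh "sudo journalctl -xeu kubelet"
	# list any control-plane containers cri-o managed to start
	out/minikube-linux-amd64 -p kubernetes-upgrade-049625 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# the suggestion from the log: retry with an explicit kubelet cgroup driver
	out/minikube-linux-amd64 start -p kubernetes-upgrade-049625 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd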
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-049625
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-049625: (1.382521644s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-049625 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-049625 status --format={{.Host}}: exit status 7 (65.080479ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
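Exit status 7 from the status check appears to be accepted by the test because the profile was just stopped and only the host state ("Stopped") matters before the upgrade attempt. A condensed sketch of the stop, status check, and upgrade sequence exercised above, reusing the profile and flags from this run:

	out/minikube-linux-amd64 stop -p kubernetes-upgrade-049625
	out/minikube-linux-amd64 -p kubernetes-upgrade-049625 status --format={{.Host}}   # prints "Stopped"; a non-zero exit is expected here
	out/minikube-linux-amd64 start -p kubernetes-upgrade-049625 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio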
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-049625 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-049625 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.034077589s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-049625 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-049625 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-049625 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (99.567607ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-049625] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20151
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20151-942401/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-942401/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-049625
	    minikube start -p kubernetes-upgrade-049625 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0496252 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.0, by running:
	    
	    minikube start -p kubernetes-upgrade-049625 --kubernetes-version=v1.32.0
	    

                                                
                                                
** /stderr **
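Exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) is the expected refusal: minikube will not downgrade the existing v1.32.0 cluster to v1.20.0 in place. A sketch of the first recovery path from the suggestion above, with the profile name and the driver/runtime flags from this run:

	# recreate the profile at the older Kubernetes version instead of downgrading in place
	out/minikube-linux-amd64 delete -p kubernetes-upgrade-049625
	out/minikube-linux-amd64 start -p kubernetes-upgrade-049625 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio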
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-049625 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-049625 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (12.78117823s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-01-20 12:26:37.07246873 +0000 UTC m=+3888.318565163
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-049625 -n kubernetes-upgrade-049625
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-049625 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-049625 logs -n 25: (1.234017971s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-816069 sudo crio            | cilium-816069             | jenkins | v1.35.0 | 20 Jan 25 12:22 UTC |                     |
	|         | config                                |                           |         |         |                     |                     |
	| delete  | -p cilium-816069                      | cilium-816069             | jenkins | v1.35.0 | 20 Jan 25 12:22 UTC | 20 Jan 25 12:22 UTC |
	| start   | -p stopped-upgrade-038534             | minikube                  | jenkins | v1.26.0 | 20 Jan 25 12:22 UTC | 20 Jan 25 12:23 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-378897                | NoKubernetes-378897       | jenkins | v1.35.0 | 20 Jan 25 12:22 UTC | 20 Jan 25 12:22 UTC |
	| start   | -p NoKubernetes-378897                | NoKubernetes-378897       | jenkins | v1.35.0 | 20 Jan 25 12:22 UTC | 20 Jan 25 12:23 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-438919             | running-upgrade-438919    | jenkins | v1.35.0 | 20 Jan 25 12:22 UTC | 20 Jan 25 12:22 UTC |
	| start   | -p cert-expiration-673364             | cert-expiration-673364    | jenkins | v1.35.0 | 20 Jan 25 12:22 UTC | 20 Jan 25 12:24 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-378897 sudo           | NoKubernetes-378897       | jenkins | v1.35.0 | 20 Jan 25 12:23 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-378897                | NoKubernetes-378897       | jenkins | v1.35.0 | 20 Jan 25 12:23 UTC | 20 Jan 25 12:23 UTC |
	| start   | -p force-systemd-flag-595350          | force-systemd-flag-595350 | jenkins | v1.35.0 | 20 Jan 25 12:23 UTC | 20 Jan 25 12:24 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-038534 stop           | minikube                  | jenkins | v1.26.0 | 20 Jan 25 12:23 UTC | 20 Jan 25 12:23 UTC |
	| start   | -p stopped-upgrade-038534             | stopped-upgrade-038534    | jenkins | v1.35.0 | 20 Jan 25 12:23 UTC | 20 Jan 25 12:24 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-595350 ssh cat     | force-systemd-flag-595350 | jenkins | v1.35.0 | 20 Jan 25 12:24 UTC | 20 Jan 25 12:24 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-595350          | force-systemd-flag-595350 | jenkins | v1.35.0 | 20 Jan 25 12:24 UTC | 20 Jan 25 12:24 UTC |
	| start   | -p cert-options-600668                | cert-options-600668       | jenkins | v1.35.0 | 20 Jan 25 12:24 UTC | 20 Jan 25 12:25 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-038534             | stopped-upgrade-038534    | jenkins | v1.35.0 | 20 Jan 25 12:24 UTC | 20 Jan 25 12:24 UTC |
	| start   | -p old-k8s-version-134433             | old-k8s-version-134433    | jenkins | v1.35.0 | 20 Jan 25 12:24 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --kvm-network=default                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts               |                           |         |         |                     |                     |
	|         | --keep-context=false                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	| ssh     | cert-options-600668 ssh               | cert-options-600668       | jenkins | v1.35.0 | 20 Jan 25 12:25 UTC | 20 Jan 25 12:25 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-600668 -- sudo        | cert-options-600668       | jenkins | v1.35.0 | 20 Jan 25 12:25 UTC | 20 Jan 25 12:25 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-600668                | cert-options-600668       | jenkins | v1.35.0 | 20 Jan 25 12:25 UTC | 20 Jan 25 12:25 UTC |
	| start   | -p no-preload-496524                  | no-preload-496524         | jenkins | v1.35.0 | 20 Jan 25 12:25 UTC | 20 Jan 25 12:26 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0          |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-049625          | kubernetes-upgrade-049625 | jenkins | v1.35.0 | 20 Jan 25 12:25 UTC | 20 Jan 25 12:25 UTC |
	| start   | -p kubernetes-upgrade-049625          | kubernetes-upgrade-049625 | jenkins | v1.35.0 | 20 Jan 25 12:25 UTC | 20 Jan 25 12:26 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-049625          | kubernetes-upgrade-049625 | jenkins | v1.35.0 | 20 Jan 25 12:26 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-049625          | kubernetes-upgrade-049625 | jenkins | v1.35.0 | 20 Jan 25 12:26 UTC | 20 Jan 25 12:26 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 12:26:24
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 12:26:24.342824  990580 out.go:345] Setting OutFile to fd 1 ...
	I0120 12:26:24.342951  990580 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:26:24.342958  990580 out.go:358] Setting ErrFile to fd 2...
	I0120 12:26:24.342965  990580 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:26:24.343275  990580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-942401/.minikube/bin
	I0120 12:26:24.343998  990580 out.go:352] Setting JSON to false
	I0120 12:26:24.345615  990580 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":18527,"bootTime":1737357457,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 12:26:24.345773  990580 start.go:139] virtualization: kvm guest
	I0120 12:26:24.347701  990580 out.go:177] * [kubernetes-upgrade-049625] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 12:26:24.349331  990580 notify.go:220] Checking for updates...
	I0120 12:26:24.349347  990580 out.go:177]   - MINIKUBE_LOCATION=20151
	I0120 12:26:24.351121  990580 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 12:26:24.352981  990580 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:26:24.354330  990580 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-942401/.minikube
	I0120 12:26:24.355764  990580 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 12:26:24.357386  990580 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 12:26:24.111645  989833 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:26:24.111663  989833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 12:26:24.111678  989833 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHHostname
	I0120 12:26:24.115007  989833 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:26:24.115034  989833 main.go:141] libmachine: (no-preload-496524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:8f:cb", ip: ""} in network mk-no-preload-496524: {Iface:virbr3 ExpiryTime:2025-01-20 13:25:38 +0000 UTC Type:0 Mac:52:54:00:13:8f:cb Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-496524 Clientid:01:52:54:00:13:8f:cb}
	I0120 12:26:24.115063  989833 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined IP address 192.168.61.107 and MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:26:24.115252  989833 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHPort
	I0120 12:26:24.117042  989833 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHKeyPath
	I0120 12:26:24.119338  989833 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHUsername
	I0120 12:26:24.119532  989833 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/no-preload-496524/id_rsa Username:docker}
	I0120 12:26:24.127065  989833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45311
	I0120 12:26:24.127630  989833 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:26:24.128322  989833 main.go:141] libmachine: Using API Version  1
	I0120 12:26:24.128342  989833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:26:24.128776  989833 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:26:24.128987  989833 main.go:141] libmachine: (no-preload-496524) Calling .GetState
	I0120 12:26:24.132111  989833 main.go:141] libmachine: (no-preload-496524) Calling .DriverName
	I0120 12:26:24.132395  989833 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 12:26:24.132412  989833 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 12:26:24.132430  989833 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHHostname
	I0120 12:26:24.135582  989833 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:26:24.135984  989833 main.go:141] libmachine: (no-preload-496524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:8f:cb", ip: ""} in network mk-no-preload-496524: {Iface:virbr3 ExpiryTime:2025-01-20 13:25:38 +0000 UTC Type:0 Mac:52:54:00:13:8f:cb Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-496524 Clientid:01:52:54:00:13:8f:cb}
	I0120 12:26:24.136006  989833 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined IP address 192.168.61.107 and MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:26:24.136283  989833 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHPort
	I0120 12:26:24.136474  989833 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHKeyPath
	I0120 12:26:24.136659  989833 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHUsername
	I0120 12:26:24.136815  989833 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/no-preload-496524/id_rsa Username:docker}
	I0120 12:26:24.380675  989833 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:26:24.380942  989833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0120 12:26:24.359055  990580 config.go:182] Loaded profile config "kubernetes-upgrade-049625": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:26:24.359644  990580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:26:24.359737  990580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:26:24.375493  990580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40407
	I0120 12:26:24.376049  990580 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:26:24.376736  990580 main.go:141] libmachine: Using API Version  1
	I0120 12:26:24.376788  990580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:26:24.377284  990580 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:26:24.377543  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .DriverName
	I0120 12:26:24.377880  990580 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 12:26:24.378397  990580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:26:24.378504  990580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:26:24.394299  990580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44015
	I0120 12:26:24.394850  990580 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:26:24.395425  990580 main.go:141] libmachine: Using API Version  1
	I0120 12:26:24.395454  990580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:26:24.395839  990580 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:26:24.396069  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .DriverName
	I0120 12:26:24.434135  990580 out.go:177] * Using the kvm2 driver based on existing profile
	I0120 12:26:24.435475  990580 start.go:297] selected driver: kvm2
	I0120 12:26:24.435493  990580 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-049625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:kubernetes-upgrade-049625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.97 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:26:24.435624  990580 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 12:26:24.437387  990580 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:26:24.437575  990580 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20151-942401/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 12:26:24.457538  990580 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 12:26:24.458087  990580 cni.go:84] Creating CNI manager for ""
	I0120 12:26:24.458141  990580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:26:24.458200  990580 start.go:340] cluster config:
	{Name:kubernetes-upgrade-049625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:kubernetes-upgrade-049625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.97 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:26:24.458339  990580 iso.go:125] acquiring lock: {Name:mk7f0ac9e7ba04626414e742c3e5292d79996a4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:26:24.459918  990580 out.go:177] * Starting "kubernetes-upgrade-049625" primary control-plane node in "kubernetes-upgrade-049625" cluster
	I0120 12:26:24.461166  990580 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 12:26:24.461214  990580 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0120 12:26:24.461231  990580 cache.go:56] Caching tarball of preloaded images
	I0120 12:26:24.461344  990580 preload.go:172] Found /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 12:26:24.461360  990580 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0120 12:26:24.461481  990580 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625/config.json ...
	I0120 12:26:24.461740  990580 start.go:360] acquireMachinesLock for kubernetes-upgrade-049625: {Name:mkd5527ce9753efd08511b23d71dbb6bbf416f1b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 12:26:24.461817  990580 start.go:364] duration metric: took 47.84µs to acquireMachinesLock for "kubernetes-upgrade-049625"
	I0120 12:26:24.461844  990580 start.go:96] Skipping create...Using existing machine configuration
	I0120 12:26:24.461854  990580 fix.go:54] fixHost starting: 
	I0120 12:26:24.462278  990580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:26:24.462323  990580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:26:24.476853  990580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39933
	I0120 12:26:24.477367  990580 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:26:24.478007  990580 main.go:141] libmachine: Using API Version  1
	I0120 12:26:24.478075  990580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:26:24.478504  990580 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:26:24.478753  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .DriverName
	I0120 12:26:24.478936  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetState
	I0120 12:26:24.480678  990580 fix.go:112] recreateIfNeeded on kubernetes-upgrade-049625: state=Running err=<nil>
	W0120 12:26:24.480717  990580 fix.go:138] unexpected machine state, will restart: <nil>
	I0120 12:26:24.482256  990580 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-049625" VM ...
	I0120 12:26:24.595599  989833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:26:24.707605  989833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 12:26:25.294966  989833 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0120 12:26:25.296441  989833 node_ready.go:35] waiting up to 6m0s for node "no-preload-496524" to be "Ready" ...
	I0120 12:26:25.307725  989833 node_ready.go:49] node "no-preload-496524" has status "Ready":"True"
	I0120 12:26:25.307754  989833 node_ready.go:38] duration metric: took 11.27205ms for node "no-preload-496524" to be "Ready" ...
	I0120 12:26:25.307768  989833 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:26:25.319077  989833 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-nrl8n" in "kube-system" namespace to be "Ready" ...
	I0120 12:26:25.800689  989833 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-496524" context rescaled to 1 replicas
	I0120 12:26:25.913411  989833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.317756763s)
	I0120 12:26:25.913469  989833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.205802958s)
	I0120 12:26:25.913487  989833 main.go:141] libmachine: Making call to close driver server
	I0120 12:26:25.913504  989833 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:26:25.913525  989833 main.go:141] libmachine: Making call to close driver server
	I0120 12:26:25.913544  989833 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:26:25.913872  989833 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:26:25.913894  989833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:26:25.913903  989833 main.go:141] libmachine: Making call to close driver server
	I0120 12:26:25.913913  989833 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:26:25.913989  989833 main.go:141] libmachine: (no-preload-496524) DBG | Closing plugin on server side
	I0120 12:26:25.914039  989833 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:26:25.914055  989833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:26:25.914064  989833 main.go:141] libmachine: Making call to close driver server
	I0120 12:26:25.914071  989833 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:26:25.915616  989833 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:26:25.915626  989833 main.go:141] libmachine: (no-preload-496524) DBG | Closing plugin on server side
	I0120 12:26:25.915634  989833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:26:25.915640  989833 main.go:141] libmachine: (no-preload-496524) DBG | Closing plugin on server side
	I0120 12:26:25.915665  989833 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:26:25.915672  989833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:26:25.915727  989833 main.go:141] libmachine: (no-preload-496524) DBG | Closing plugin on server side
	I0120 12:26:25.929614  989833 main.go:141] libmachine: Making call to close driver server
	I0120 12:26:25.929634  989833 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:26:25.929864  989833 main.go:141] libmachine: (no-preload-496524) DBG | Closing plugin on server side
	I0120 12:26:25.929937  989833 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:26:25.929957  989833 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:26:25.932446  989833 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0120 12:26:24.483671  990580 machine.go:93] provisionDockerMachine start ...
	I0120 12:26:24.483698  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .DriverName
	I0120 12:26:24.483917  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHHostname
	I0120 12:26:24.487091  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:26:24.487572  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:c9:93", ip: ""} in network mk-kubernetes-upgrade-049625: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:26 +0000 UTC Type:0 Mac:52:54:00:6b:c9:93 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:kubernetes-upgrade-049625 Clientid:01:52:54:00:6b:c9:93}
	I0120 12:26:24.487616  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined IP address 192.168.72.97 and MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:26:24.487850  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHPort
	I0120 12:26:24.488035  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHKeyPath
	I0120 12:26:24.488203  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHKeyPath
	I0120 12:26:24.488308  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHUsername
	I0120 12:26:24.488496  990580 main.go:141] libmachine: Using SSH client type: native
	I0120 12:26:24.488678  990580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.97 22 <nil> <nil>}
	I0120 12:26:24.488688  990580 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 12:26:24.617462  990580 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-049625
	
	I0120 12:26:24.617552  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetMachineName
	I0120 12:26:24.617797  990580 buildroot.go:166] provisioning hostname "kubernetes-upgrade-049625"
	I0120 12:26:24.617835  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetMachineName
	I0120 12:26:24.618064  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHHostname
	I0120 12:26:24.621360  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:26:24.621873  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:c9:93", ip: ""} in network mk-kubernetes-upgrade-049625: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:26 +0000 UTC Type:0 Mac:52:54:00:6b:c9:93 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:kubernetes-upgrade-049625 Clientid:01:52:54:00:6b:c9:93}
	I0120 12:26:24.621903  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined IP address 192.168.72.97 and MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:26:24.622146  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHPort
	I0120 12:26:24.622338  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHKeyPath
	I0120 12:26:24.622527  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHKeyPath
	I0120 12:26:24.622673  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHUsername
	I0120 12:26:24.622874  990580 main.go:141] libmachine: Using SSH client type: native
	I0120 12:26:24.623108  990580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.97 22 <nil> <nil>}
	I0120 12:26:24.623123  990580 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-049625 && echo "kubernetes-upgrade-049625" | sudo tee /etc/hostname
	I0120 12:26:24.784125  990580 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-049625
	
	I0120 12:26:24.784166  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHHostname
	I0120 12:26:24.787682  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:26:24.788122  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:c9:93", ip: ""} in network mk-kubernetes-upgrade-049625: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:26 +0000 UTC Type:0 Mac:52:54:00:6b:c9:93 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:kubernetes-upgrade-049625 Clientid:01:52:54:00:6b:c9:93}
	I0120 12:26:24.788150  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined IP address 192.168.72.97 and MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:26:24.788496  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHPort
	I0120 12:26:24.788723  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHKeyPath
	I0120 12:26:24.788884  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHKeyPath
	I0120 12:26:24.789056  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHUsername
	I0120 12:26:24.789306  990580 main.go:141] libmachine: Using SSH client type: native
	I0120 12:26:24.789544  990580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.97 22 <nil> <nil>}
	I0120 12:26:24.789600  990580 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-049625' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-049625/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-049625' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 12:26:24.908546  990580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 12:26:24.908634  990580 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20151-942401/.minikube CaCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20151-942401/.minikube}
	I0120 12:26:24.908664  990580 buildroot.go:174] setting up certificates
	I0120 12:26:24.908677  990580 provision.go:84] configureAuth start
	I0120 12:26:24.908692  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetMachineName
	I0120 12:26:24.908975  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetIP
	I0120 12:26:24.912563  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:26:24.912986  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:c9:93", ip: ""} in network mk-kubernetes-upgrade-049625: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:26 +0000 UTC Type:0 Mac:52:54:00:6b:c9:93 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:kubernetes-upgrade-049625 Clientid:01:52:54:00:6b:c9:93}
	I0120 12:26:24.913012  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined IP address 192.168.72.97 and MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:26:24.913241  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHHostname
	I0120 12:26:24.915964  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:26:24.916329  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:c9:93", ip: ""} in network mk-kubernetes-upgrade-049625: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:26 +0000 UTC Type:0 Mac:52:54:00:6b:c9:93 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:kubernetes-upgrade-049625 Clientid:01:52:54:00:6b:c9:93}
	I0120 12:26:24.916365  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined IP address 192.168.72.97 and MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:26:24.916542  990580 provision.go:143] copyHostCerts
	I0120 12:26:24.916608  990580 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem, removing ...
	I0120 12:26:24.916632  990580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem
	I0120 12:26:24.916700  990580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem (1078 bytes)
	I0120 12:26:24.916817  990580 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem, removing ...
	I0120 12:26:24.916828  990580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem
	I0120 12:26:24.916860  990580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem (1123 bytes)
	I0120 12:26:24.916944  990580 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem, removing ...
	I0120 12:26:24.916955  990580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem
	I0120 12:26:24.916982  990580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem (1675 bytes)
	I0120 12:26:24.917050  990580 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-049625 san=[127.0.0.1 192.168.72.97 kubernetes-upgrade-049625 localhost minikube]
	I0120 12:26:25.072922  990580 provision.go:177] copyRemoteCerts
	I0120 12:26:25.072999  990580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 12:26:25.073034  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHHostname
	I0120 12:26:25.076061  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:26:25.076520  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:c9:93", ip: ""} in network mk-kubernetes-upgrade-049625: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:26 +0000 UTC Type:0 Mac:52:54:00:6b:c9:93 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:kubernetes-upgrade-049625 Clientid:01:52:54:00:6b:c9:93}
	I0120 12:26:25.076556  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined IP address 192.168.72.97 and MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:26:25.076767  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHPort
	I0120 12:26:25.077025  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHKeyPath
	I0120 12:26:25.077209  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHUsername
	I0120 12:26:25.077384  990580 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/kubernetes-upgrade-049625/id_rsa Username:docker}
	I0120 12:26:25.170899  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0120 12:26:25.203889  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0120 12:26:25.236693  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0120 12:26:25.267252  990580 provision.go:87] duration metric: took 358.558066ms to configureAuth
	I0120 12:26:25.267291  990580 buildroot.go:189] setting minikube options for container-runtime
	I0120 12:26:25.267526  990580 config.go:182] Loaded profile config "kubernetes-upgrade-049625": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:26:25.267628  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHHostname
	I0120 12:26:25.270791  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:26:25.271206  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:c9:93", ip: ""} in network mk-kubernetes-upgrade-049625: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:26 +0000 UTC Type:0 Mac:52:54:00:6b:c9:93 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:kubernetes-upgrade-049625 Clientid:01:52:54:00:6b:c9:93}
	I0120 12:26:25.271235  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined IP address 192.168.72.97 and MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:26:25.271477  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHPort
	I0120 12:26:25.271683  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHKeyPath
	I0120 12:26:25.271916  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHKeyPath
	I0120 12:26:25.272091  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHUsername
	I0120 12:26:25.272272  990580 main.go:141] libmachine: Using SSH client type: native
	I0120 12:26:25.272502  990580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.97 22 <nil> <nil>}
	I0120 12:26:25.272527  990580 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 12:26:26.178758  990580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 12:26:26.178793  990580 machine.go:96] duration metric: took 1.695103167s to provisionDockerMachine
	I0120 12:26:26.178811  990580 start.go:293] postStartSetup for "kubernetes-upgrade-049625" (driver="kvm2")
	I0120 12:26:26.178826  990580 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 12:26:26.178852  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .DriverName
	I0120 12:26:26.179255  990580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 12:26:26.179302  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHHostname
	I0120 12:26:26.182395  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:26:26.182942  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:c9:93", ip: ""} in network mk-kubernetes-upgrade-049625: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:26 +0000 UTC Type:0 Mac:52:54:00:6b:c9:93 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:kubernetes-upgrade-049625 Clientid:01:52:54:00:6b:c9:93}
	I0120 12:26:26.182972  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined IP address 192.168.72.97 and MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:26:26.183197  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHPort
	I0120 12:26:26.183428  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHKeyPath
	I0120 12:26:26.183675  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHUsername
	I0120 12:26:26.183946  990580 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/kubernetes-upgrade-049625/id_rsa Username:docker}
	I0120 12:26:26.299462  990580 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 12:26:26.308655  990580 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 12:26:26.308683  990580 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-942401/.minikube/addons for local assets ...
	I0120 12:26:26.308745  990580 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-942401/.minikube/files for local assets ...
	I0120 12:26:26.308816  990580 filesync.go:149] local asset: /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem -> 9496562.pem in /etc/ssl/certs
	I0120 12:26:26.308900  990580 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 12:26:26.361122  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem --> /etc/ssl/certs/9496562.pem (1708 bytes)
	I0120 12:26:26.426322  990580 start.go:296] duration metric: took 247.490856ms for postStartSetup
	I0120 12:26:26.426383  990580 fix.go:56] duration metric: took 1.964527806s for fixHost
	I0120 12:26:26.426415  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHHostname
	I0120 12:26:26.429847  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:26:26.430331  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:c9:93", ip: ""} in network mk-kubernetes-upgrade-049625: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:26 +0000 UTC Type:0 Mac:52:54:00:6b:c9:93 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:kubernetes-upgrade-049625 Clientid:01:52:54:00:6b:c9:93}
	I0120 12:26:26.430403  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined IP address 192.168.72.97 and MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:26:26.430648  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHPort
	I0120 12:26:26.430876  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHKeyPath
	I0120 12:26:26.431095  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHKeyPath
	I0120 12:26:26.431272  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHUsername
	I0120 12:26:26.431469  990580 main.go:141] libmachine: Using SSH client type: native
	I0120 12:26:26.431655  990580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.97 22 <nil> <nil>}
	I0120 12:26:26.431680  990580 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 12:26:26.624214  990580 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737375986.613191929
	
	I0120 12:26:26.624244  990580 fix.go:216] guest clock: 1737375986.613191929
	I0120 12:26:26.624252  990580 fix.go:229] Guest: 2025-01-20 12:26:26.613191929 +0000 UTC Remote: 2025-01-20 12:26:26.426390466 +0000 UTC m=+2.129941687 (delta=186.801463ms)
	I0120 12:26:26.624274  990580 fix.go:200] guest clock delta is within tolerance: 186.801463ms
	I0120 12:26:26.624280  990580 start.go:83] releasing machines lock for "kubernetes-upgrade-049625", held for 2.162446529s
	I0120 12:26:26.624301  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .DriverName
	I0120 12:26:26.624574  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetIP
	I0120 12:26:26.627927  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:26:26.628397  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:c9:93", ip: ""} in network mk-kubernetes-upgrade-049625: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:26 +0000 UTC Type:0 Mac:52:54:00:6b:c9:93 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:kubernetes-upgrade-049625 Clientid:01:52:54:00:6b:c9:93}
	I0120 12:26:26.628434  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined IP address 192.168.72.97 and MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:26:26.628676  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .DriverName
	I0120 12:26:26.629155  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .DriverName
	I0120 12:26:26.629361  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .DriverName
	I0120 12:26:26.629452  990580 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 12:26:26.629524  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHHostname
	I0120 12:26:26.629580  990580 ssh_runner.go:195] Run: cat /version.json
	I0120 12:26:26.629615  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHHostname
	I0120 12:26:26.632552  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:26:26.632761  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:26:26.632986  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:c9:93", ip: ""} in network mk-kubernetes-upgrade-049625: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:26 +0000 UTC Type:0 Mac:52:54:00:6b:c9:93 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:kubernetes-upgrade-049625 Clientid:01:52:54:00:6b:c9:93}
	I0120 12:26:26.633017  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined IP address 192.168.72.97 and MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:26:26.633213  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:c9:93", ip: ""} in network mk-kubernetes-upgrade-049625: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:26 +0000 UTC Type:0 Mac:52:54:00:6b:c9:93 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:kubernetes-upgrade-049625 Clientid:01:52:54:00:6b:c9:93}
	I0120 12:26:26.633218  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHPort
	I0120 12:26:26.633243  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined IP address 192.168.72.97 and MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:26:26.633397  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHPort
	I0120 12:26:26.633418  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHKeyPath
	I0120 12:26:26.633551  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHKeyPath
	I0120 12:26:26.633635  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHUsername
	I0120 12:26:26.633710  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetSSHUsername
	I0120 12:26:26.633971  990580 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/kubernetes-upgrade-049625/id_rsa Username:docker}
	I0120 12:26:26.633977  990580 sshutil.go:53] new ssh client: &{IP:192.168.72.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/kubernetes-upgrade-049625/id_rsa Username:docker}
	I0120 12:26:26.817695  990580 ssh_runner.go:195] Run: systemctl --version
	I0120 12:26:26.830475  990580 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 12:26:27.011015  990580 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 12:26:27.018538  990580 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 12:26:27.018607  990580 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 12:26:27.028109  990580 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0120 12:26:27.028131  990580 start.go:495] detecting cgroup driver to use...
	I0120 12:26:27.028186  990580 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 12:26:27.045751  990580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 12:26:27.061364  990580 docker.go:217] disabling cri-docker service (if available) ...
	I0120 12:26:27.061410  990580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 12:26:27.073654  990580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 12:26:27.085769  990580 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 12:26:27.237644  990580 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 12:26:27.430044  990580 docker.go:233] disabling docker service ...
	I0120 12:26:27.430136  990580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 12:26:27.447748  990580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 12:26:27.461709  990580 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 12:26:27.622723  990580 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 12:26:27.796289  990580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 12:26:27.812015  990580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 12:26:27.838229  990580 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0120 12:26:27.838305  990580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:26:27.851774  990580 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 12:26:27.851845  990580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:26:27.864863  990580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:26:27.878554  990580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:26:27.891650  990580 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 12:26:27.901645  990580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:26:27.916491  990580 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:26:27.927741  990580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:26:27.937705  990580 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 12:26:27.950614  990580 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 12:26:27.961489  990580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:26:28.124239  990580 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 12:26:28.421730  990580 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 12:26:28.421813  990580 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 12:26:28.426929  990580 start.go:563] Will wait 60s for crictl version
	I0120 12:26:28.427000  990580 ssh_runner.go:195] Run: which crictl
	I0120 12:26:28.430886  990580 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 12:26:28.507612  990580 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 12:26:28.507698  990580 ssh_runner.go:195] Run: crio --version
	I0120 12:26:28.566501  990580 ssh_runner.go:195] Run: crio --version
	I0120 12:26:28.651511  990580 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0120 12:26:28.652782  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) Calling .GetIP
	I0120 12:26:28.656753  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:26:28.657315  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:c9:93", ip: ""} in network mk-kubernetes-upgrade-049625: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:26 +0000 UTC Type:0 Mac:52:54:00:6b:c9:93 Iaid: IPaddr:192.168.72.97 Prefix:24 Hostname:kubernetes-upgrade-049625 Clientid:01:52:54:00:6b:c9:93}
	I0120 12:26:28.657346  990580 main.go:141] libmachine: (kubernetes-upgrade-049625) DBG | domain kubernetes-upgrade-049625 has defined IP address 192.168.72.97 and MAC address 52:54:00:6b:c9:93 in network mk-kubernetes-upgrade-049625
	I0120 12:26:28.657671  990580 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0120 12:26:28.671273  990580 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-049625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:kubernetes-upgrade-049625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.97 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 12:26:28.671436  990580 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 12:26:28.671511  990580 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:26:28.760296  990580 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 12:26:28.760326  990580 crio.go:433] Images already preloaded, skipping extraction
	I0120 12:26:28.760393  990580 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:26:28.800373  990580 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 12:26:28.800397  990580 cache_images.go:84] Images are preloaded, skipping loading
	I0120 12:26:28.800406  990580 kubeadm.go:934] updating node { 192.168.72.97 8443 v1.32.0 crio true true} ...
	I0120 12:26:28.800519  990580 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-049625 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:kubernetes-upgrade-049625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 12:26:28.800641  990580 ssh_runner.go:195] Run: crio config
	I0120 12:26:28.848253  990580 cni.go:84] Creating CNI manager for ""
	I0120 12:26:28.848277  990580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:26:28.848292  990580 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 12:26:28.848323  990580 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.97 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-049625 NodeName:kubernetes-upgrade-049625 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 12:26:28.848490  990580 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.97
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-049625"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.97"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.97"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 12:26:28.848566  990580 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 12:26:28.862479  990580 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 12:26:28.862578  990580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 12:26:28.873448  990580 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0120 12:26:28.889238  990580 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 12:26:28.905428  990580 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2302 bytes)
	I0120 12:26:28.920712  990580 ssh_runner.go:195] Run: grep 192.168.72.97	control-plane.minikube.internal$ /etc/hosts
	I0120 12:26:28.924551  990580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:26:29.042817  990580 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:26:29.057412  990580 certs.go:68] Setting up /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625 for IP: 192.168.72.97
	I0120 12:26:29.057432  990580 certs.go:194] generating shared ca certs ...
	I0120 12:26:29.057448  990580 certs.go:226] acquiring lock for ca certs: {Name:mk7d6f61e0c358ebc451db1b73619131ab80bd63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:26:29.057589  990580 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20151-942401/.minikube/ca.key
	I0120 12:26:29.057625  990580 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.key
	I0120 12:26:29.057633  990580 certs.go:256] generating profile certs ...
	I0120 12:26:29.057706  990580 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625/client.key
	I0120 12:26:29.057748  990580 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625/apiserver.key.0c1e7817
	I0120 12:26:29.057794  990580 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625/proxy-client.key
	I0120 12:26:29.057934  990580 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656.pem (1338 bytes)
	W0120 12:26:29.057976  990580 certs.go:480] ignoring /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656_empty.pem, impossibly tiny 0 bytes
	I0120 12:26:29.057989  990580 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem (1679 bytes)
	I0120 12:26:29.058022  990580 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem (1078 bytes)
	I0120 12:26:29.058058  990580 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem (1123 bytes)
	I0120 12:26:29.058084  990580 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem (1675 bytes)
	I0120 12:26:29.058134  990580 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem (1708 bytes)
	I0120 12:26:29.058748  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 12:26:29.081655  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0120 12:26:29.108220  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 12:26:29.138025  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 12:26:29.163901  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0120 12:26:29.193862  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0120 12:26:29.221783  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 12:26:29.245147  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kubernetes-upgrade-049625/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0120 12:26:29.268133  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 12:26:29.290537  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656.pem --> /usr/share/ca-certificates/949656.pem (1338 bytes)
	I0120 12:26:29.314282  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem --> /usr/share/ca-certificates/9496562.pem (1708 bytes)
	I0120 12:26:29.337690  990580 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 12:26:25.933962  989833 addons.go:514] duration metric: took 1.876106884s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0120 12:26:27.325990  989833 pod_ready.go:103] pod "coredns-668d6bf9bc-nrl8n" in "kube-system" namespace has status "Ready":"False"
	I0120 12:26:29.326430  989833 pod_ready.go:93] pod "coredns-668d6bf9bc-nrl8n" in "kube-system" namespace has status "Ready":"True"
	I0120 12:26:29.326469  989833 pod_ready.go:82] duration metric: took 4.00736085s for pod "coredns-668d6bf9bc-nrl8n" in "kube-system" namespace to be "Ready" ...
	I0120 12:26:29.326484  989833 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-qbzt4" in "kube-system" namespace to be "Ready" ...
	I0120 12:26:29.328207  989833 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-qbzt4" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-qbzt4" not found
	I0120 12:26:29.328231  989833 pod_ready.go:82] duration metric: took 1.738579ms for pod "coredns-668d6bf9bc-qbzt4" in "kube-system" namespace to be "Ready" ...
	E0120 12:26:29.328240  989833 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-qbzt4" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-qbzt4" not found
	I0120 12:26:29.328248  989833 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:26:29.810810  989425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:26:29.811165  989425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:26:30.833823  989833 pod_ready.go:93] pod "etcd-no-preload-496524" in "kube-system" namespace has status "Ready":"True"
	I0120 12:26:30.833859  989833 pod_ready.go:82] duration metric: took 1.505602853s for pod "etcd-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:26:30.833875  989833 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:26:30.839004  989833 pod_ready.go:93] pod "kube-apiserver-no-preload-496524" in "kube-system" namespace has status "Ready":"True"
	I0120 12:26:30.839025  989833 pod_ready.go:82] duration metric: took 5.141459ms for pod "kube-apiserver-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:26:30.839036  989833 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:26:30.843415  989833 pod_ready.go:93] pod "kube-controller-manager-no-preload-496524" in "kube-system" namespace has status "Ready":"True"
	I0120 12:26:30.843438  989833 pod_ready.go:82] duration metric: took 4.394767ms for pod "kube-controller-manager-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:26:30.843451  989833 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h7lgg" in "kube-system" namespace to be "Ready" ...
	I0120 12:26:30.848021  989833 pod_ready.go:93] pod "kube-proxy-h7lgg" in "kube-system" namespace has status "Ready":"True"
	I0120 12:26:30.848044  989833 pod_ready.go:82] duration metric: took 4.584263ms for pod "kube-proxy-h7lgg" in "kube-system" namespace to be "Ready" ...
	I0120 12:26:30.848055  989833 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:26:31.123162  989833 pod_ready.go:93] pod "kube-scheduler-no-preload-496524" in "kube-system" namespace has status "Ready":"True"
	I0120 12:26:31.123188  989833 pod_ready.go:82] duration metric: took 275.125313ms for pod "kube-scheduler-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:26:31.123197  989833 pod_ready.go:39] duration metric: took 5.815414299s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:26:31.123218  989833 api_server.go:52] waiting for apiserver process to appear ...
	I0120 12:26:31.123277  989833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:26:31.137192  989833 api_server.go:72] duration metric: took 7.079250931s to wait for apiserver process to appear ...
	I0120 12:26:31.137222  989833 api_server.go:88] waiting for apiserver healthz status ...
	I0120 12:26:31.137248  989833 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0120 12:26:31.144289  989833 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
	I0120 12:26:31.145308  989833 api_server.go:141] control plane version: v1.32.0
	I0120 12:26:31.145336  989833 api_server.go:131] duration metric: took 8.105534ms to wait for apiserver health ...
	I0120 12:26:31.145346  989833 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 12:26:31.325384  989833 system_pods.go:59] 7 kube-system pods found
	I0120 12:26:31.325412  989833 system_pods.go:61] "coredns-668d6bf9bc-nrl8n" [8a924671-ef5f-4efb-be07-58824ff7e7f6] Running
	I0120 12:26:31.325417  989833 system_pods.go:61] "etcd-no-preload-496524" [51f31b28-82e0-46d2-8f45-07078da530f3] Running
	I0120 12:26:31.325421  989833 system_pods.go:61] "kube-apiserver-no-preload-496524" [37958fd0-c411-475d-a095-2733098d47fe] Running
	I0120 12:26:31.325426  989833 system_pods.go:61] "kube-controller-manager-no-preload-496524" [c0046a1c-0a48-497b-a4f4-c53bc93d4cab] Running
	I0120 12:26:31.325429  989833 system_pods.go:61] "kube-proxy-h7lgg" [d97db720-de91-45f1-a949-a81addecd5b5] Running
	I0120 12:26:31.325433  989833 system_pods.go:61] "kube-scheduler-no-preload-496524" [670dc471-ba5e-4c30-ad95-96fca84b5297] Running
	I0120 12:26:31.325438  989833 system_pods.go:61] "storage-provisioner" [a882a790-0ba0-4cef-87cf-5ee521ea4c45] Running
	I0120 12:26:31.325445  989833 system_pods.go:74] duration metric: took 180.090485ms to wait for pod list to return data ...
	I0120 12:26:31.325453  989833 default_sa.go:34] waiting for default service account to be created ...
	I0120 12:26:31.524275  989833 default_sa.go:45] found service account: "default"
	I0120 12:26:31.524306  989833 default_sa.go:55] duration metric: took 198.846874ms for default service account to be created ...
	I0120 12:26:31.524315  989833 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 12:26:31.726284  989833 system_pods.go:87] 7 kube-system pods found
	I0120 12:26:31.924056  989833 system_pods.go:105] "coredns-668d6bf9bc-nrl8n" [8a924671-ef5f-4efb-be07-58824ff7e7f6] Running
	I0120 12:26:31.924083  989833 system_pods.go:105] "etcd-no-preload-496524" [51f31b28-82e0-46d2-8f45-07078da530f3] Running
	I0120 12:26:31.924091  989833 system_pods.go:105] "kube-apiserver-no-preload-496524" [37958fd0-c411-475d-a095-2733098d47fe] Running
	I0120 12:26:31.924098  989833 system_pods.go:105] "kube-controller-manager-no-preload-496524" [c0046a1c-0a48-497b-a4f4-c53bc93d4cab] Running
	I0120 12:26:31.924111  989833 system_pods.go:105] "kube-proxy-h7lgg" [d97db720-de91-45f1-a949-a81addecd5b5] Running
	I0120 12:26:31.924117  989833 system_pods.go:105] "kube-scheduler-no-preload-496524" [670dc471-ba5e-4c30-ad95-96fca84b5297] Running
	I0120 12:26:31.924123  989833 system_pods.go:105] "storage-provisioner" [a882a790-0ba0-4cef-87cf-5ee521ea4c45] Running
	I0120 12:26:31.924134  989833 system_pods.go:147] duration metric: took 399.810965ms to wait for k8s-apps to be running ...
	I0120 12:26:31.924143  989833 system_svc.go:44] waiting for kubelet service to be running ....
	I0120 12:26:31.924199  989833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 12:26:31.939917  989833 system_svc.go:56] duration metric: took 15.76405ms WaitForService to wait for kubelet
	I0120 12:26:31.939949  989833 kubeadm.go:582] duration metric: took 7.882016428s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 12:26:31.939969  989833 node_conditions.go:102] verifying NodePressure condition ...
	I0120 12:26:32.123814  989833 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 12:26:32.123854  989833 node_conditions.go:123] node cpu capacity is 2
	I0120 12:26:32.123869  989833 node_conditions.go:105] duration metric: took 183.894376ms to run NodePressure ...
	I0120 12:26:32.123885  989833 start.go:241] waiting for startup goroutines ...
	I0120 12:26:32.123895  989833 start.go:246] waiting for cluster config update ...
	I0120 12:26:32.123909  989833 start.go:255] writing updated cluster config ...
	I0120 12:26:32.124260  989833 ssh_runner.go:195] Run: rm -f paused
	I0120 12:26:32.188656  989833 start.go:600] kubectl: 1.32.1, cluster: 1.32.0 (minor skew: 0)
	I0120 12:26:32.190711  989833 out.go:177] * Done! kubectl is now configured to use "no-preload-496524" cluster and "default" namespace by default
	I0120 12:26:29.354191  990580 ssh_runner.go:195] Run: openssl version
	I0120 12:26:29.359301  990580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 12:26:29.369089  990580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:26:29.373560  990580 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 11:22 /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:26:29.373598  990580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:26:29.378865  990580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 12:26:29.387329  990580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/949656.pem && ln -fs /usr/share/ca-certificates/949656.pem /etc/ssl/certs/949656.pem"
	I0120 12:26:29.397138  990580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/949656.pem
	I0120 12:26:29.401394  990580 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 11:30 /usr/share/ca-certificates/949656.pem
	I0120 12:26:29.401440  990580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/949656.pem
	I0120 12:26:29.406665  990580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/949656.pem /etc/ssl/certs/51391683.0"
	I0120 12:26:29.415848  990580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9496562.pem && ln -fs /usr/share/ca-certificates/9496562.pem /etc/ssl/certs/9496562.pem"
	I0120 12:26:29.426304  990580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9496562.pem
	I0120 12:26:29.430679  990580 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 11:30 /usr/share/ca-certificates/9496562.pem
	I0120 12:26:29.430717  990580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9496562.pem
	I0120 12:26:29.436058  990580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9496562.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 12:26:29.448161  990580 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 12:26:29.452974  990580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 12:26:29.458286  990580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 12:26:29.463559  990580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 12:26:29.468560  990580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 12:26:29.478475  990580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 12:26:29.498710  990580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0120 12:26:29.503980  990580 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-049625 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:kubernetes-upgrade-049625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.97 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:26:29.504075  990580 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 12:26:29.504123  990580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 12:26:29.538869  990580 cri.go:89] found id: "6dfdfa63e2b2fb6ee34fd1deed6112eb6600e858843f0eb43ab85623f3b554c2"
	I0120 12:26:29.538897  990580 cri.go:89] found id: "32902742c1f2320e57b8909339778c426f35e5d240e05c18470b39302e49f733"
	I0120 12:26:29.538903  990580 cri.go:89] found id: "a4008a41b2b698a7b310ecb212ba448236507a233c670770daaa19dd7522bad1"
	I0120 12:26:29.538906  990580 cri.go:89] found id: "8440c321ef48219218f10ae61c903aca2377decb5e5f3274b6b946ac2d2f3e5e"
	I0120 12:26:29.538909  990580 cri.go:89] found id: ""
	I0120 12:26:29.538964  990580 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
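The truncated log above ends while enumerating kube-system container IDs through crictl. For anyone reproducing this post-mortem by hand, a minimal sketch of inspecting those containers directly on the node, assuming SSH access to the profile via minikube ssh (the container ID is copied from the log above):

	# open a shell on the kubernetes-upgrade-049625 node
	minikube ssh -p kubernetes-upgrade-049625
	# list kube-system containers known to CRI-O (same query the harness runs)
	sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
	# inspect one of the reported container IDs
	sudo crictl inspect 6dfdfa63e2b2fb6ee34fd1deed6112eb6600e858843f0eb43ab85623f3b554c2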
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-049625 -n kubernetes-upgrade-049625
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-049625 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: kube-proxy-cgqqs storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-049625 describe pod kube-proxy-cgqqs storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-049625 describe pod kube-proxy-cgqqs storage-provisioner: exit status 1 (80.203113ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "kube-proxy-cgqqs" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-049625 describe pod kube-proxy-cgqqs storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-049625" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-049625
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-049625: (1.151465661s)
--- FAIL: TestKubernetesUpgrade (382.92s)
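To reproduce this failure outside CI, the single test can be re-run from a minikube source checkout; a minimal sketch, assuming a prebuilt out/minikube-linux-amd64 binary and the same kvm2/crio configuration (the exact harness flags used by this job are not recorded in the report and are omitted):

	# run only the upgrade test from the integration suite, with a generous timeout
	go test ./test/integration -run 'TestKubernetesUpgrade$' -v -timeout 90m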

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (37.66s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-298045 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-298045 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (33.700351779s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-298045] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20151
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20151-942401/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-942401/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-298045" primary control-plane node in "pause-298045" cluster
	* Updating the running kvm2 "pause-298045" VM ...
	* Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-298045" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 12:21:47.505747  984209 out.go:345] Setting OutFile to fd 1 ...
	I0120 12:21:47.506083  984209 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:21:47.506094  984209 out.go:358] Setting ErrFile to fd 2...
	I0120 12:21:47.506101  984209 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:21:47.506385  984209 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-942401/.minikube/bin
	I0120 12:21:47.507109  984209 out.go:352] Setting JSON to false
	I0120 12:21:47.508515  984209 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":18250,"bootTime":1737357457,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 12:21:47.508628  984209 start.go:139] virtualization: kvm guest
	I0120 12:21:47.511205  984209 out.go:177] * [pause-298045] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 12:21:47.512784  984209 out.go:177]   - MINIKUBE_LOCATION=20151
	I0120 12:21:47.512780  984209 notify.go:220] Checking for updates...
	I0120 12:21:47.515305  984209 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 12:21:47.516700  984209 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:21:47.517939  984209 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-942401/.minikube
	I0120 12:21:47.519446  984209 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 12:21:47.520861  984209 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 12:21:47.522787  984209 config.go:182] Loaded profile config "pause-298045": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:21:47.523429  984209 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:21:47.523547  984209 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:21:47.545888  984209 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46627
	I0120 12:21:47.546590  984209 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:21:47.547448  984209 main.go:141] libmachine: Using API Version  1
	I0120 12:21:47.547473  984209 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:21:47.547915  984209 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:21:47.548341  984209 main.go:141] libmachine: (pause-298045) Calling .DriverName
	I0120 12:21:47.548733  984209 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 12:21:47.549150  984209 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:21:47.549190  984209 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:21:47.574646  984209 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33785
	I0120 12:21:47.576328  984209 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:21:47.576967  984209 main.go:141] libmachine: Using API Version  1
	I0120 12:21:47.576996  984209 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:21:47.577425  984209 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:21:47.577659  984209 main.go:141] libmachine: (pause-298045) Calling .DriverName
	I0120 12:21:47.622643  984209 out.go:177] * Using the kvm2 driver based on existing profile
	I0120 12:21:47.623969  984209 start.go:297] selected driver: kvm2
	I0120 12:21:47.623990  984209 start.go:901] validating driver "kvm2" against &{Name:pause-298045 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:pause-298045 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.60 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:21:47.624184  984209 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 12:21:47.624649  984209 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:21:47.624750  984209 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20151-942401/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 12:21:47.648915  984209 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 12:21:47.649860  984209 cni.go:84] Creating CNI manager for ""
	I0120 12:21:47.649912  984209 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:21:47.649977  984209 start.go:340] cluster config:
	{Name:pause-298045 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:pause-298045 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.60 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:21:47.650145  984209 iso.go:125] acquiring lock: {Name:mk7f0ac9e7ba04626414e742c3e5292d79996a4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:21:47.651959  984209 out.go:177] * Starting "pause-298045" primary control-plane node in "pause-298045" cluster
	I0120 12:21:47.653275  984209 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 12:21:47.653325  984209 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0120 12:21:47.653335  984209 cache.go:56] Caching tarball of preloaded images
	I0120 12:21:47.653446  984209 preload.go:172] Found /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 12:21:47.653460  984209 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0120 12:21:47.653600  984209 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/pause-298045/config.json ...
	I0120 12:21:47.653833  984209 start.go:360] acquireMachinesLock for pause-298045: {Name:mkd5527ce9753efd08511b23d71dbb6bbf416f1b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 12:21:47.653884  984209 start.go:364] duration metric: took 28.989µs to acquireMachinesLock for "pause-298045"
	I0120 12:21:47.653901  984209 start.go:96] Skipping create...Using existing machine configuration
	I0120 12:21:47.653908  984209 fix.go:54] fixHost starting: 
	I0120 12:21:47.654298  984209 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:21:47.654350  984209 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:21:47.674111  984209 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38379
	I0120 12:21:47.674702  984209 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:21:47.675310  984209 main.go:141] libmachine: Using API Version  1
	I0120 12:21:47.675333  984209 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:21:47.675692  984209 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:21:47.675955  984209 main.go:141] libmachine: (pause-298045) Calling .DriverName
	I0120 12:21:47.676179  984209 main.go:141] libmachine: (pause-298045) Calling .GetState
	I0120 12:21:47.678003  984209 fix.go:112] recreateIfNeeded on pause-298045: state=Running err=<nil>
	W0120 12:21:47.678040  984209 fix.go:138] unexpected machine state, will restart: <nil>
	I0120 12:21:47.680106  984209 out.go:177] * Updating the running kvm2 "pause-298045" VM ...
	I0120 12:21:47.681401  984209 machine.go:93] provisionDockerMachine start ...
	I0120 12:21:47.681424  984209 main.go:141] libmachine: (pause-298045) Calling .DriverName
	I0120 12:21:47.681617  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHHostname
	I0120 12:21:47.683910  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:47.684586  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:47.684653  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:47.684890  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHPort
	I0120 12:21:47.686660  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:47.686863  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:47.686972  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHUsername
	I0120 12:21:47.687090  984209 main.go:141] libmachine: Using SSH client type: native
	I0120 12:21:47.687322  984209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0120 12:21:47.687334  984209 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 12:21:47.806065  984209 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-298045
	
	I0120 12:21:47.806100  984209 main.go:141] libmachine: (pause-298045) Calling .GetMachineName
	I0120 12:21:47.807072  984209 buildroot.go:166] provisioning hostname "pause-298045"
	I0120 12:21:47.807124  984209 main.go:141] libmachine: (pause-298045) Calling .GetMachineName
	I0120 12:21:47.807393  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHHostname
	I0120 12:21:47.811076  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:47.811629  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:47.811662  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:47.811962  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHPort
	I0120 12:21:47.812161  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:47.812323  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:47.812487  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHUsername
	I0120 12:21:47.812680  984209 main.go:141] libmachine: Using SSH client type: native
	I0120 12:21:47.812976  984209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0120 12:21:47.812999  984209 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-298045 && echo "pause-298045" | sudo tee /etc/hostname
	I0120 12:21:47.944902  984209 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-298045
	
	I0120 12:21:47.944934  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHHostname
	I0120 12:21:47.948698  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:47.949220  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:47.949288  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:47.949825  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHPort
	I0120 12:21:47.950133  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:47.950338  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:47.950484  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHUsername
	I0120 12:21:47.950715  984209 main.go:141] libmachine: Using SSH client type: native
	I0120 12:21:47.950962  984209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0120 12:21:47.951029  984209 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-298045' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-298045/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-298045' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 12:21:48.075875  984209 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 12:21:48.075917  984209 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20151-942401/.minikube CaCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20151-942401/.minikube}
	I0120 12:21:48.075969  984209 buildroot.go:174] setting up certificates
	I0120 12:21:48.075979  984209 provision.go:84] configureAuth start
	I0120 12:21:48.076001  984209 main.go:141] libmachine: (pause-298045) Calling .GetMachineName
	I0120 12:21:48.076319  984209 main.go:141] libmachine: (pause-298045) Calling .GetIP
	I0120 12:21:48.079748  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:48.080268  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:48.080316  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:48.080512  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHHostname
	I0120 12:21:48.083503  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:48.083939  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:48.083967  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:48.084143  984209 provision.go:143] copyHostCerts
	I0120 12:21:48.084222  984209 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem, removing ...
	I0120 12:21:48.084266  984209 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem
	I0120 12:21:48.084336  984209 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem (1123 bytes)
	I0120 12:21:48.084492  984209 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem, removing ...
	I0120 12:21:48.084522  984209 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem
	I0120 12:21:48.084556  984209 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem (1675 bytes)
	I0120 12:21:48.084635  984209 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem, removing ...
	I0120 12:21:48.084654  984209 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem
	I0120 12:21:48.084679  984209 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem (1078 bytes)
	I0120 12:21:48.084820  984209 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem org=jenkins.pause-298045 san=[127.0.0.1 192.168.50.60 localhost minikube pause-298045]
	I0120 12:21:48.324701  984209 provision.go:177] copyRemoteCerts
	I0120 12:21:48.324775  984209 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 12:21:48.324821  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHHostname
	I0120 12:21:48.327899  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:48.328190  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:48.328228  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:48.328525  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHPort
	I0120 12:21:48.328798  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:48.328980  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHUsername
	I0120 12:21:48.329139  984209 sshutil.go:53] new ssh client: &{IP:192.168.50.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/pause-298045/id_rsa Username:docker}
	I0120 12:21:48.423464  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0120 12:21:48.454560  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0120 12:21:48.481363  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0120 12:21:48.506936  984209 provision.go:87] duration metric: took 430.937393ms to configureAuth
	I0120 12:21:48.506960  984209 buildroot.go:189] setting minikube options for container-runtime
	I0120 12:21:48.507111  984209 config.go:182] Loaded profile config "pause-298045": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:21:48.507174  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHHostname
	I0120 12:21:48.510005  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:48.510510  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:48.510562  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:48.510832  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHPort
	I0120 12:21:48.511011  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:48.511167  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:48.511351  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHUsername
	I0120 12:21:48.511515  984209 main.go:141] libmachine: Using SSH client type: native
	I0120 12:21:48.511718  984209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0120 12:21:48.511743  984209 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 12:21:55.184816  984209 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 12:21:55.184850  984209 machine.go:96] duration metric: took 7.503431773s to provisionDockerMachine
	I0120 12:21:55.184866  984209 start.go:293] postStartSetup for "pause-298045" (driver="kvm2")
	I0120 12:21:55.184879  984209 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 12:21:55.184906  984209 main.go:141] libmachine: (pause-298045) Calling .DriverName
	I0120 12:21:55.185298  984209 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 12:21:55.185331  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHHostname
	I0120 12:21:55.188337  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:55.188649  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:55.188680  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:55.189413  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHPort
	I0120 12:21:55.190871  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:55.191196  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHUsername
	I0120 12:21:55.191415  984209 sshutil.go:53] new ssh client: &{IP:192.168.50.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/pause-298045/id_rsa Username:docker}
	I0120 12:21:55.278122  984209 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 12:21:55.282651  984209 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 12:21:55.282682  984209 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-942401/.minikube/addons for local assets ...
	I0120 12:21:55.282769  984209 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-942401/.minikube/files for local assets ...
	I0120 12:21:55.282853  984209 filesync.go:149] local asset: /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem -> 9496562.pem in /etc/ssl/certs
	I0120 12:21:55.282942  984209 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 12:21:55.292898  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem --> /etc/ssl/certs/9496562.pem (1708 bytes)
	I0120 12:21:55.320838  984209 start.go:296] duration metric: took 135.954837ms for postStartSetup
	I0120 12:21:55.320887  984209 fix.go:56] duration metric: took 7.666979647s for fixHost
	I0120 12:21:55.320914  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHHostname
	I0120 12:21:55.324372  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:55.324852  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:55.324880  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:55.325110  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHPort
	I0120 12:21:55.325319  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:55.325526  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:55.325711  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHUsername
	I0120 12:21:55.325905  984209 main.go:141] libmachine: Using SSH client type: native
	I0120 12:21:55.326130  984209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0120 12:21:55.326148  984209 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 12:21:55.431197  984209 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737375715.391020400
	
	I0120 12:21:55.431224  984209 fix.go:216] guest clock: 1737375715.391020400
	I0120 12:21:55.431235  984209 fix.go:229] Guest: 2025-01-20 12:21:55.3910204 +0000 UTC Remote: 2025-01-20 12:21:55.320893381 +0000 UTC m=+7.867264972 (delta=70.127019ms)
	I0120 12:21:55.431263  984209 fix.go:200] guest clock delta is within tolerance: 70.127019ms
	I0120 12:21:55.431270  984209 start.go:83] releasing machines lock for "pause-298045", held for 7.777375463s
	I0120 12:21:55.431307  984209 main.go:141] libmachine: (pause-298045) Calling .DriverName
	I0120 12:21:55.431605  984209 main.go:141] libmachine: (pause-298045) Calling .GetIP
	I0120 12:21:55.434885  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:55.435333  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:55.435390  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:55.435554  984209 main.go:141] libmachine: (pause-298045) Calling .DriverName
	I0120 12:21:55.436171  984209 main.go:141] libmachine: (pause-298045) Calling .DriverName
	I0120 12:21:55.436380  984209 main.go:141] libmachine: (pause-298045) Calling .DriverName
	I0120 12:21:55.436473  984209 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 12:21:55.436542  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHHostname
	I0120 12:21:55.436803  984209 ssh_runner.go:195] Run: cat /version.json
	I0120 12:21:55.436834  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHHostname
	I0120 12:21:55.439617  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:55.439952  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:55.439981  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:55.440152  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHPort
	I0120 12:21:55.440212  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:55.440338  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:55.440519  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHUsername
	I0120 12:21:55.440716  984209 sshutil.go:53] new ssh client: &{IP:192.168.50.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/pause-298045/id_rsa Username:docker}
	I0120 12:21:55.440731  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:55.440759  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:55.440943  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHPort
	I0120 12:21:55.441108  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:55.441267  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHUsername
	I0120 12:21:55.441453  984209 sshutil.go:53] new ssh client: &{IP:192.168.50.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/pause-298045/id_rsa Username:docker}
	I0120 12:21:55.547694  984209 ssh_runner.go:195] Run: systemctl --version
	I0120 12:21:55.555288  984209 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 12:21:55.714394  984209 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 12:21:55.737530  984209 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 12:21:55.737628  984209 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 12:21:55.754466  984209 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0120 12:21:55.754497  984209 start.go:495] detecting cgroup driver to use...
	I0120 12:21:55.754588  984209 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 12:21:55.785063  984209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 12:21:55.810833  984209 docker.go:217] disabling cri-docker service (if available) ...
	I0120 12:21:55.810909  984209 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 12:21:55.850702  984209 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 12:21:55.874109  984209 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 12:21:56.068672  984209 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 12:21:56.234848  984209 docker.go:233] disabling docker service ...
	I0120 12:21:56.234920  984209 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 12:21:56.259943  984209 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 12:21:56.273211  984209 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 12:21:56.504079  984209 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 12:21:56.771090  984209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 12:21:56.807702  984209 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 12:21:56.849556  984209 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0120 12:21:56.849619  984209 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:56.915758  984209 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 12:21:56.915843  984209 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:56.961546  984209 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:57.000209  984209 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:57.056192  984209 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 12:21:57.109297  984209 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:57.143033  984209 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:57.156545  984209 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:57.207048  984209 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 12:21:57.251961  984209 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 12:21:57.279286  984209 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:21:57.494183  984209 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 12:21:58.221635  984209 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 12:21:58.221717  984209 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 12:21:58.228428  984209 start.go:563] Will wait 60s for crictl version
	I0120 12:21:58.228493  984209 ssh_runner.go:195] Run: which crictl
	I0120 12:21:58.233479  984209 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 12:21:58.283380  984209 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 12:21:58.283476  984209 ssh_runner.go:195] Run: crio --version
	I0120 12:21:58.319657  984209 ssh_runner.go:195] Run: crio --version
	I0120 12:21:58.357799  984209 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0120 12:21:58.359130  984209 main.go:141] libmachine: (pause-298045) Calling .GetIP
	I0120 12:21:58.362846  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:58.363249  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:58.363277  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:58.363484  984209 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0120 12:21:58.368125  984209 kubeadm.go:883] updating cluster {Name:pause-298045 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:pause-298045 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.60 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 12:21:58.368276  984209 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 12:21:58.368320  984209 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:21:58.436082  984209 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 12:21:58.436112  984209 crio.go:433] Images already preloaded, skipping extraction
	I0120 12:21:58.436178  984209 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:21:58.483739  984209 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 12:21:58.483767  984209 cache_images.go:84] Images are preloaded, skipping loading
	I0120 12:21:58.483780  984209 kubeadm.go:934] updating node { 192.168.50.60 8443 v1.32.0 crio true true} ...
	I0120 12:21:58.483916  984209 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-298045 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:pause-298045 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 12:21:58.484007  984209 ssh_runner.go:195] Run: crio config
	I0120 12:21:58.538699  984209 cni.go:84] Creating CNI manager for ""
	I0120 12:21:58.538723  984209 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:21:58.538736  984209 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 12:21:58.538768  984209 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.60 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-298045 NodeName:pause-298045 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.60"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.60 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 12:21:58.538956  984209 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.60
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-298045"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.60"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.60"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
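	The generated kubeadm config above is one multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by ---) that the harness later copies to /var/tmp/minikube/kubeadm.yaml.new. A minimal Go sketch for splitting such a file and listing each document's kind; it assumes the gopkg.in/yaml.v3 package and a local copy named kubeadm.yaml (both assumptions, not part of the harness):

package main

import (
	"bytes"
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

// docHeader captures only the fields shared by every kubeadm config document.
type docHeader struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	// kubeadm.yaml is assumed to be a local copy of the generated config
	// (on the node it is written to /var/tmp/minikube/kubeadm.yaml.new).
	data, err := os.ReadFile("kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for {
		var h docHeader
		if err := dec.Decode(&h); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			log.Fatal(err)
		}
		fmt.Printf("%-35s %s\n", h.APIVersion, h.Kind)
	}
}

	Run against the dump above, this would print the four kinds in the order they appear.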
	
	I0120 12:21:58.539039  984209 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 12:21:58.551841  984209 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 12:21:58.551919  984209 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 12:21:58.563410  984209 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0120 12:21:58.582875  984209 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 12:21:58.602853  984209 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I0120 12:21:58.627254  984209 ssh_runner.go:195] Run: grep 192.168.50.60	control-plane.minikube.internal$ /etc/hosts
	I0120 12:21:58.632549  984209 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:21:58.798793  984209 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:21:58.825671  984209 certs.go:68] Setting up /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/pause-298045 for IP: 192.168.50.60
	I0120 12:21:58.825704  984209 certs.go:194] generating shared ca certs ...
	I0120 12:21:58.825728  984209 certs.go:226] acquiring lock for ca certs: {Name:mk7d6f61e0c358ebc451db1b73619131ab80bd63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:21:58.825932  984209 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20151-942401/.minikube/ca.key
	I0120 12:21:58.826004  984209 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.key
	I0120 12:21:58.826021  984209 certs.go:256] generating profile certs ...
	I0120 12:21:58.826158  984209 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/pause-298045/client.key
	I0120 12:21:58.826251  984209 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/pause-298045/apiserver.key.7d49e320
	I0120 12:21:58.826318  984209 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/pause-298045/proxy-client.key
	I0120 12:21:58.826474  984209 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656.pem (1338 bytes)
	W0120 12:21:58.826547  984209 certs.go:480] ignoring /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656_empty.pem, impossibly tiny 0 bytes
	I0120 12:21:58.826566  984209 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem (1679 bytes)
	I0120 12:21:58.826602  984209 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem (1078 bytes)
	I0120 12:21:58.826637  984209 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem (1123 bytes)
	I0120 12:21:58.826675  984209 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem (1675 bytes)
	I0120 12:21:58.826736  984209 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem (1708 bytes)
	I0120 12:21:58.827584  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 12:21:58.909041  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0120 12:21:58.948606  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 12:21:59.012282  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 12:21:59.109695  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/pause-298045/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0120 12:21:59.238343  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/pause-298045/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0120 12:21:59.299284  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/pause-298045/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 12:21:59.334101  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/pause-298045/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0120 12:21:59.374934  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 12:21:59.414484  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656.pem --> /usr/share/ca-certificates/949656.pem (1338 bytes)
	I0120 12:21:59.444169  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem --> /usr/share/ca-certificates/9496562.pem (1708 bytes)
	I0120 12:21:59.467884  984209 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 12:21:59.490444  984209 ssh_runner.go:195] Run: openssl version
	I0120 12:21:59.507626  984209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 12:21:59.520255  984209 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:21:59.524911  984209 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 11:22 /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:21:59.524971  984209 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:21:59.531352  984209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 12:21:59.543105  984209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/949656.pem && ln -fs /usr/share/ca-certificates/949656.pem /etc/ssl/certs/949656.pem"
	I0120 12:21:59.555663  984209 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/949656.pem
	I0120 12:21:59.561820  984209 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 11:30 /usr/share/ca-certificates/949656.pem
	I0120 12:21:59.561864  984209 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/949656.pem
	I0120 12:21:59.569243  984209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/949656.pem /etc/ssl/certs/51391683.0"
	I0120 12:21:59.581486  984209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9496562.pem && ln -fs /usr/share/ca-certificates/9496562.pem /etc/ssl/certs/9496562.pem"
	I0120 12:21:59.592694  984209 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9496562.pem
	I0120 12:21:59.597427  984209 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 11:30 /usr/share/ca-certificates/9496562.pem
	I0120 12:21:59.597478  984209 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9496562.pem
	I0120 12:21:59.604688  984209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9496562.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 12:21:59.616268  984209 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 12:21:59.626223  984209 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 12:21:59.632273  984209 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 12:21:59.637690  984209 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 12:21:59.643343  984209 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 12:21:59.649192  984209 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 12:21:59.654933  984209 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
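	Each -checkend 86400 call above asks openssl whether the certificate stays valid for at least another 24 hours before minikube decides whether to regenerate it. A stdlib-only Go sketch of the same check on one PEM file (an illustration, not minikube's implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	if len(os.Args) != 2 {
		log.Fatal("usage: checkcert <cert.pem>")
	}
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Same question as `openssl x509 -checkend 86400`: does the cert
	// remain valid for at least another 24 hours?
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate will expire within 24h:", cert.Subject.CommonName)
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h:", cert.Subject.CommonName)
}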
	I0120 12:21:59.660489  984209 kubeadm.go:392] StartCluster: {Name:pause-298045 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:pause-298045 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.60 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:21:59.660630  984209 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 12:21:59.660679  984209 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 12:21:59.702703  984209 cri.go:89] found id: "7522111a81eb592884102e40c70c96a442aacd7ead34e623d6d75cb0047e54a1"
	I0120 12:21:59.702723  984209 cri.go:89] found id: "87ea2b78f22aa3f634c75f73e4ff59c82419e70bcabcbf38ac3cd2cff94e916e"
	I0120 12:21:59.702727  984209 cri.go:89] found id: "97c209a11074c58552c075bb6b27e8d296987fd3a0b46a09585dcfc690275572"
	I0120 12:21:59.702730  984209 cri.go:89] found id: "f400c9e5b8388764e549d978dee73e17cb00cf3f100ab6ebfd3b553e155860ba"
	I0120 12:21:59.702733  984209 cri.go:89] found id: "97779b8bb3c647064d9431c5881d2fd1d07f9924cefcb8010cf1b47b282e8191"
	I0120 12:21:59.702736  984209 cri.go:89] found id: "38e061ad131bdad6beb214a78c3d8be29d96f19162a9861a6704003a916763f8"
	I0120 12:21:59.702739  984209 cri.go:89] found id: ""
	I0120 12:21:59.702780  984209 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
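The container IDs in the log above come from filtering kube-system pods by CRI label. A short Go sketch (a hypothetical helper, not part of helpers_test.go) that issues the same crictl query via os/exec and prints each ID; it assumes sudo and crictl are available on the node it runs on:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same query the harness runs over SSH.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		log.Fatal(err)
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}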
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-298045 -n pause-298045
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-298045 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-298045 logs -n 25: (1.323320215s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p scheduled-stop-293264       | scheduled-stop-293264     | jenkins | v1.35.0 | 20 Jan 25 12:17 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-293264       | scheduled-stop-293264     | jenkins | v1.35.0 | 20 Jan 25 12:17 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-293264       | scheduled-stop-293264     | jenkins | v1.35.0 | 20 Jan 25 12:17 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-293264       | scheduled-stop-293264     | jenkins | v1.35.0 | 20 Jan 25 12:17 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-293264       | scheduled-stop-293264     | jenkins | v1.35.0 | 20 Jan 25 12:17 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-293264       | scheduled-stop-293264     | jenkins | v1.35.0 | 20 Jan 25 12:17 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-293264       | scheduled-stop-293264     | jenkins | v1.35.0 | 20 Jan 25 12:17 UTC | 20 Jan 25 12:17 UTC |
	|         | --cancel-scheduled             |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-293264       | scheduled-stop-293264     | jenkins | v1.35.0 | 20 Jan 25 12:18 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-293264       | scheduled-stop-293264     | jenkins | v1.35.0 | 20 Jan 25 12:18 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-293264       | scheduled-stop-293264     | jenkins | v1.35.0 | 20 Jan 25 12:18 UTC | 20 Jan 25 12:18 UTC |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| delete  | -p scheduled-stop-293264       | scheduled-stop-293264     | jenkins | v1.35.0 | 20 Jan 25 12:19 UTC | 20 Jan 25 12:19 UTC |
	| start   | -p offline-crio-348074         | offline-crio-348074       | jenkins | v1.35.0 | 20 Jan 25 12:19 UTC | 20 Jan 25 12:20 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p force-systemd-env-414382    | force-systemd-env-414382  | jenkins | v1.35.0 | 20 Jan 25 12:19 UTC | 20 Jan 25 12:20 UTC |
	|         | --memory=2048                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-378897         | NoKubernetes-378897       | jenkins | v1.35.0 | 20 Jan 25 12:19 UTC |                     |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-378897         | NoKubernetes-378897       | jenkins | v1.35.0 | 20 Jan 25 12:19 UTC | 20 Jan 25 12:20 UTC |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-438919      | minikube                  | jenkins | v1.26.0 | 20 Jan 25 12:19 UTC | 20 Jan 25 12:21 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| delete  | -p offline-crio-348074         | offline-crio-348074       | jenkins | v1.35.0 | 20 Jan 25 12:20 UTC | 20 Jan 25 12:20 UTC |
	| start   | -p pause-298045 --memory=2048  | pause-298045              | jenkins | v1.35.0 | 20 Jan 25 12:20 UTC | 20 Jan 25 12:21 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-414382    | force-systemd-env-414382  | jenkins | v1.35.0 | 20 Jan 25 12:20 UTC | 20 Jan 25 12:20 UTC |
	| start   | -p kubernetes-upgrade-049625   | kubernetes-upgrade-049625 | jenkins | v1.35.0 | 20 Jan 25 12:20 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-378897         | NoKubernetes-378897       | jenkins | v1.35.0 | 20 Jan 25 12:20 UTC | 20 Jan 25 12:21 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-438919      | running-upgrade-438919    | jenkins | v1.35.0 | 20 Jan 25 12:21 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p pause-298045                | pause-298045              | jenkins | v1.35.0 | 20 Jan 25 12:21 UTC | 20 Jan 25 12:22 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-378897         | NoKubernetes-378897       | jenkins | v1.35.0 | 20 Jan 25 12:21 UTC | 20 Jan 25 12:21 UTC |
	| start   | -p NoKubernetes-378897         | NoKubernetes-378897       | jenkins | v1.35.0 | 20 Jan 25 12:21 UTC |                     |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 12:21:49
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 12:21:49.469376  984358 out.go:345] Setting OutFile to fd 1 ...
	I0120 12:21:49.469459  984358 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:21:49.469462  984358 out.go:358] Setting ErrFile to fd 2...
	I0120 12:21:49.469465  984358 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:21:49.470097  984358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-942401/.minikube/bin
	I0120 12:21:49.471314  984358 out.go:352] Setting JSON to false
	I0120 12:21:49.472624  984358 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":18252,"bootTime":1737357457,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 12:21:49.472753  984358 start.go:139] virtualization: kvm guest
	I0120 12:21:49.474796  984358 out.go:177] * [NoKubernetes-378897] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 12:21:49.476697  984358 notify.go:220] Checking for updates...
	I0120 12:21:49.476784  984358 out.go:177]   - MINIKUBE_LOCATION=20151
	I0120 12:21:49.478390  984358 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 12:21:49.480195  984358 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:21:49.482104  984358 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-942401/.minikube
	I0120 12:21:49.483600  984358 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 12:21:49.485086  984358 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 12:21:49.487248  984358 config.go:182] Loaded profile config "kubernetes-upgrade-049625": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0120 12:21:49.487388  984358 config.go:182] Loaded profile config "pause-298045": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:21:49.487505  984358 config.go:182] Loaded profile config "running-upgrade-438919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0120 12:21:49.487528  984358 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0120 12:21:49.487667  984358 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 12:21:49.524155  984358 out.go:177] * Using the kvm2 driver based on user configuration
	I0120 12:21:49.525444  984358 start.go:297] selected driver: kvm2
	I0120 12:21:49.525453  984358 start.go:901] validating driver "kvm2" against <nil>
	I0120 12:21:49.525462  984358 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 12:21:49.525729  984358 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0120 12:21:49.525781  984358 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:21:49.525852  984358 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20151-942401/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 12:21:49.541486  984358 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 12:21:49.541538  984358 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 12:21:49.542024  984358 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0120 12:21:49.542153  984358 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0120 12:21:49.542173  984358 cni.go:84] Creating CNI manager for ""
	I0120 12:21:49.542233  984358 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:21:49.542243  984358 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0120 12:21:49.542275  984358 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0120 12:21:49.542317  984358 start.go:340] cluster config:
	{Name:NoKubernetes-378897 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-378897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:21:49.542425  984358 iso.go:125] acquiring lock: {Name:mk7f0ac9e7ba04626414e742c3e5292d79996a4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:21:49.544219  984358 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-378897
	I0120 12:21:47.620592  983974 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 12:21:47.641614  983974 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 12:21:47.824434  983974 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 12:21:48.054866  983974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 12:21:48.069565  983974 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 12:21:48.097950  983974 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0120 12:21:48.098007  983974 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:48.107830  983974 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 12:21:48.107893  983974 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:48.118595  983974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:48.127838  983974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:48.137932  983974 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 12:21:48.146820  983974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:48.156363  983974 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:48.174311  983974 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:48.183957  983974 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 12:21:48.194783  983974 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 12:21:48.208814  983974 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:21:48.405846  983974 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 12:21:50.830211  983974 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.424323062s)
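	The sed one-liners above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image pinned to registry.k8s.io/pause:3.7, cgroup_manager switched to cgroupfs) before crio is restarted. A rough Go equivalent of the two line rewrites, using regexp in multi-line mode; a sketch for illustration with hypothetical starting contents, not the harness implementation:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical stand-in for /etc/crio/crio.conf.d/02-crio.conf; the real
	// file is edited in place with `sed -i` over SSH.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
`
	pauseRe := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroupRe := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)

	// Mirror the two substitutions from the log.
	conf = pauseRe.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.7"`)
	conf = cgroupRe.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	fmt.Print(conf)
}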
	I0120 12:21:50.830244  983974 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 12:21:50.830304  983974 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 12:21:50.834515  983974 start.go:563] Will wait 60s for crictl version
	I0120 12:21:50.834598  983974 ssh_runner.go:195] Run: which crictl
	I0120 12:21:50.840022  983974 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 12:21:50.863834  983974 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.22.3
	RuntimeApiVersion:  v1alpha2
	I0120 12:21:50.863918  983974 ssh_runner.go:195] Run: crio --version
	I0120 12:21:50.896361  983974 ssh_runner.go:195] Run: crio --version
	I0120 12:21:50.927328  983974 out.go:177] * Preparing Kubernetes v1.24.1 on CRI-O 1.22.3 ...
	I0120 12:21:49.145828  983149 out.go:235]   - Booting up control plane ...
	I0120 12:21:49.145964  983149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 12:21:49.153153  983149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 12:21:49.153237  983149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 12:21:49.153897  983149 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 12:21:49.160318  983149 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0120 12:21:47.681401  984209 machine.go:93] provisionDockerMachine start ...
	I0120 12:21:47.681424  984209 main.go:141] libmachine: (pause-298045) Calling .DriverName
	I0120 12:21:47.681617  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHHostname
	I0120 12:21:47.683910  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:47.684586  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:47.684653  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:47.684890  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHPort
	I0120 12:21:47.686660  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:47.686863  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:47.686972  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHUsername
	I0120 12:21:47.687090  984209 main.go:141] libmachine: Using SSH client type: native
	I0120 12:21:47.687322  984209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0120 12:21:47.687334  984209 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 12:21:47.806065  984209 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-298045
	
	I0120 12:21:47.806100  984209 main.go:141] libmachine: (pause-298045) Calling .GetMachineName
	I0120 12:21:47.807072  984209 buildroot.go:166] provisioning hostname "pause-298045"
	I0120 12:21:47.807124  984209 main.go:141] libmachine: (pause-298045) Calling .GetMachineName
	I0120 12:21:47.807393  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHHostname
	I0120 12:21:47.811076  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:47.811629  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:47.811662  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:47.811962  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHPort
	I0120 12:21:47.812161  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:47.812323  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:47.812487  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHUsername
	I0120 12:21:47.812680  984209 main.go:141] libmachine: Using SSH client type: native
	I0120 12:21:47.812976  984209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0120 12:21:47.812999  984209 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-298045 && echo "pause-298045" | sudo tee /etc/hostname
	I0120 12:21:47.944902  984209 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-298045
	
	I0120 12:21:47.944934  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHHostname
	I0120 12:21:47.948698  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:47.949220  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:47.949288  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:47.949825  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHPort
	I0120 12:21:47.950133  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:47.950338  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:47.950484  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHUsername
	I0120 12:21:47.950715  984209 main.go:141] libmachine: Using SSH client type: native
	I0120 12:21:47.950962  984209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0120 12:21:47.951029  984209 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-298045' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-298045/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-298045' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 12:21:48.075875  984209 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 12:21:48.075917  984209 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20151-942401/.minikube CaCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20151-942401/.minikube}
	I0120 12:21:48.075969  984209 buildroot.go:174] setting up certificates
	I0120 12:21:48.075979  984209 provision.go:84] configureAuth start
	I0120 12:21:48.076001  984209 main.go:141] libmachine: (pause-298045) Calling .GetMachineName
	I0120 12:21:48.076319  984209 main.go:141] libmachine: (pause-298045) Calling .GetIP
	I0120 12:21:48.079748  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:48.080268  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:48.080316  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:48.080512  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHHostname
	I0120 12:21:48.083503  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:48.083939  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:48.083967  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:48.084143  984209 provision.go:143] copyHostCerts
	I0120 12:21:48.084222  984209 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem, removing ...
	I0120 12:21:48.084266  984209 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem
	I0120 12:21:48.084336  984209 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem (1123 bytes)
	I0120 12:21:48.084492  984209 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem, removing ...
	I0120 12:21:48.084522  984209 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem
	I0120 12:21:48.084556  984209 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem (1675 bytes)
	I0120 12:21:48.084635  984209 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem, removing ...
	I0120 12:21:48.084654  984209 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem
	I0120 12:21:48.084679  984209 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem (1078 bytes)
	I0120 12:21:48.084820  984209 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem org=jenkins.pause-298045 san=[127.0.0.1 192.168.50.60 localhost minikube pause-298045]
	I0120 12:21:48.324701  984209 provision.go:177] copyRemoteCerts
	I0120 12:21:48.324775  984209 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 12:21:48.324821  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHHostname
	I0120 12:21:48.327899  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:48.328190  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:48.328228  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:48.328525  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHPort
	I0120 12:21:48.328798  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:48.328980  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHUsername
	I0120 12:21:48.329139  984209 sshutil.go:53] new ssh client: &{IP:192.168.50.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/pause-298045/id_rsa Username:docker}
	I0120 12:21:48.423464  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0120 12:21:48.454560  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0120 12:21:48.481363  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0120 12:21:48.506936  984209 provision.go:87] duration metric: took 430.937393ms to configureAuth
	I0120 12:21:48.506960  984209 buildroot.go:189] setting minikube options for container-runtime
	I0120 12:21:48.507111  984209 config.go:182] Loaded profile config "pause-298045": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:21:48.507174  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHHostname
	I0120 12:21:48.510005  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:48.510510  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:48.510562  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:48.510832  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHPort
	I0120 12:21:48.511011  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:48.511167  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:48.511351  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHUsername
	I0120 12:21:48.511515  984209 main.go:141] libmachine: Using SSH client type: native
	I0120 12:21:48.511718  984209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0120 12:21:48.511743  984209 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 12:21:50.928860  983974 main.go:141] libmachine: (running-upgrade-438919) Calling .GetIP
	I0120 12:21:50.931985  983974 main.go:141] libmachine: (running-upgrade-438919) DBG | domain running-upgrade-438919 has defined MAC address 52:54:00:9a:d1:63 in network mk-running-upgrade-438919
	I0120 12:21:50.932361  983974 main.go:141] libmachine: (running-upgrade-438919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:d1:63", ip: ""} in network mk-running-upgrade-438919: {Iface:virbr1 ExpiryTime:2025-01-20 13:20:38 +0000 UTC Type:0 Mac:52:54:00:9a:d1:63 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:running-upgrade-438919 Clientid:01:52:54:00:9a:d1:63}
	I0120 12:21:50.932395  983974 main.go:141] libmachine: (running-upgrade-438919) DBG | domain running-upgrade-438919 has defined IP address 192.168.39.240 and MAC address 52:54:00:9a:d1:63 in network mk-running-upgrade-438919
	I0120 12:21:50.932620  983974 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0120 12:21:50.936064  983974 kubeadm.go:883] updating cluster {Name:running-upgrade-438919 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-438919 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0120 12:21:50.936164  983974 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0120 12:21:50.936204  983974 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:21:50.968456  983974 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.1". assuming images are not preloaded.
	I0120 12:21:50.968515  983974 ssh_runner.go:195] Run: which lz4
	I0120 12:21:50.971914  983974 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 12:21:50.975767  983974 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 12:21:50.975793  983974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (496813465 bytes)
	I0120 12:21:49.545467  984358 preload.go:131] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W0120 12:21:50.471634  984358 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
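	The 404 above means no preload tarball is published for the requested Kubernetes version (v0.0.0 here, i.e. a no-Kubernetes start), so the images would have to be pulled instead. A small Go sketch, not minikube code, that probes the same URL with an HTTP HEAD request and treats 404 as "no preload available":

package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// URL taken from the log line above; for v0.0.0 it is expected to 404.
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4"

	resp, err := http.Head(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	switch resp.StatusCode {
	case http.StatusOK:
		fmt.Println("preload exists,", resp.ContentLength, "bytes")
	case http.StatusNotFound:
		fmt.Println("no preload published for this Kubernetes version")
	default:
		fmt.Println("unexpected status:", resp.Status)
	}
}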
	I0120 12:21:50.471784  984358 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/NoKubernetes-378897/config.json ...
	I0120 12:21:50.471814  984358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/NoKubernetes-378897/config.json: {Name:mk0f9f7eb221ce2809a1e9b33745b57b779c73cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:21:50.471963  984358 start.go:360] acquireMachinesLock for NoKubernetes-378897: {Name:mkd5527ce9753efd08511b23d71dbb6bbf416f1b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 12:21:55.431385  984358 start.go:364] duration metric: took 4.959376725s to acquireMachinesLock for "NoKubernetes-378897"
	I0120 12:21:55.431436  984358 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-378897 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-378897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 12:21:55.431556  984358 start.go:125] createHost starting for "" (driver="kvm2")
	I0120 12:21:55.184816  984209 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 12:21:55.184850  984209 machine.go:96] duration metric: took 7.503431773s to provisionDockerMachine
	I0120 12:21:55.184866  984209 start.go:293] postStartSetup for "pause-298045" (driver="kvm2")
	I0120 12:21:55.184879  984209 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 12:21:55.184906  984209 main.go:141] libmachine: (pause-298045) Calling .DriverName
	I0120 12:21:55.185298  984209 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 12:21:55.185331  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHHostname
	I0120 12:21:55.188337  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:55.188649  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:55.188680  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:55.189413  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHPort
	I0120 12:21:55.190871  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:55.191196  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHUsername
	I0120 12:21:55.191415  984209 sshutil.go:53] new ssh client: &{IP:192.168.50.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/pause-298045/id_rsa Username:docker}
	I0120 12:21:55.278122  984209 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 12:21:55.282651  984209 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 12:21:55.282682  984209 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-942401/.minikube/addons for local assets ...
	I0120 12:21:55.282769  984209 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-942401/.minikube/files for local assets ...
	I0120 12:21:55.282853  984209 filesync.go:149] local asset: /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem -> 9496562.pem in /etc/ssl/certs
	I0120 12:21:55.282942  984209 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 12:21:55.292898  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem --> /etc/ssl/certs/9496562.pem (1708 bytes)
	I0120 12:21:55.320838  984209 start.go:296] duration metric: took 135.954837ms for postStartSetup
	I0120 12:21:55.320887  984209 fix.go:56] duration metric: took 7.666979647s for fixHost
	I0120 12:21:55.320914  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHHostname
	I0120 12:21:55.324372  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:55.324852  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:55.324880  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:55.325110  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHPort
	I0120 12:21:55.325319  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:55.325526  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:55.325711  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHUsername
	I0120 12:21:55.325905  984209 main.go:141] libmachine: Using SSH client type: native
	I0120 12:21:55.326130  984209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0120 12:21:55.326148  984209 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 12:21:55.431197  984209 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737375715.391020400
	
	I0120 12:21:55.431224  984209 fix.go:216] guest clock: 1737375715.391020400
	I0120 12:21:55.431235  984209 fix.go:229] Guest: 2025-01-20 12:21:55.3910204 +0000 UTC Remote: 2025-01-20 12:21:55.320893381 +0000 UTC m=+7.867264972 (delta=70.127019ms)
	I0120 12:21:55.431263  984209 fix.go:200] guest clock delta is within tolerance: 70.127019ms
	I0120 12:21:55.431270  984209 start.go:83] releasing machines lock for "pause-298045", held for 7.777375463s
	I0120 12:21:55.431307  984209 main.go:141] libmachine: (pause-298045) Calling .DriverName
	I0120 12:21:55.431605  984209 main.go:141] libmachine: (pause-298045) Calling .GetIP
	I0120 12:21:55.434885  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:55.435333  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:55.435390  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:55.435554  984209 main.go:141] libmachine: (pause-298045) Calling .DriverName
	I0120 12:21:55.436171  984209 main.go:141] libmachine: (pause-298045) Calling .DriverName
	I0120 12:21:55.436380  984209 main.go:141] libmachine: (pause-298045) Calling .DriverName
	I0120 12:21:55.436473  984209 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 12:21:55.436542  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHHostname
	I0120 12:21:55.436803  984209 ssh_runner.go:195] Run: cat /version.json
	I0120 12:21:55.436834  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHHostname
	I0120 12:21:55.439617  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:55.439952  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:55.439981  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:55.440152  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHPort
	I0120 12:21:55.440212  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:55.440338  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:55.440519  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHUsername
	I0120 12:21:55.440716  984209 sshutil.go:53] new ssh client: &{IP:192.168.50.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/pause-298045/id_rsa Username:docker}
	I0120 12:21:55.440731  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:55.440759  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:55.440943  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHPort
	I0120 12:21:55.441108  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:55.441267  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHUsername
	I0120 12:21:55.441453  984209 sshutil.go:53] new ssh client: &{IP:192.168.50.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/pause-298045/id_rsa Username:docker}
	I0120 12:21:55.547694  984209 ssh_runner.go:195] Run: systemctl --version
	I0120 12:21:55.555288  984209 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 12:21:55.714394  984209 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 12:21:55.737530  984209 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 12:21:55.737628  984209 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 12:21:55.754466  984209 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0120 12:21:55.754497  984209 start.go:495] detecting cgroup driver to use...
	I0120 12:21:55.754588  984209 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 12:21:55.785063  984209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 12:21:55.810833  984209 docker.go:217] disabling cri-docker service (if available) ...
	I0120 12:21:55.810909  984209 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 12:21:55.850702  984209 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 12:21:55.874109  984209 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 12:21:56.068672  984209 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 12:21:56.234848  984209 docker.go:233] disabling docker service ...
	I0120 12:21:56.234920  984209 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 12:21:56.259943  984209 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 12:21:56.273211  984209 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 12:21:56.504079  984209 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 12:21:56.771090  984209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 12:21:56.807702  984209 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 12:21:56.849556  984209 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0120 12:21:56.849619  984209 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:56.915758  984209 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 12:21:56.915843  984209 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:56.961546  984209 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:57.000209  984209 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:57.056192  984209 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 12:21:57.109297  984209 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:57.143033  984209 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:57.156545  984209 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:57.207048  984209 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 12:21:57.251961  984209 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 12:21:57.279286  984209 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:21:57.494183  984209 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 12:21:52.878126  983974 crio.go:462] duration metric: took 1.906229198s to copy over tarball
	I0120 12:21:52.878215  983974 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 12:21:56.684295  983974 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.806038305s)
	I0120 12:21:56.684353  983974 crio.go:469] duration metric: took 3.806179935s to extract the tarball
	I0120 12:21:56.684366  983974 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0120 12:21:56.749618  983974 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:21:56.788519  983974 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.1". assuming images are not preloaded.
	I0120 12:21:56.788552  983974 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0120 12:21:56.788609  983974 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:21:56.788635  983974 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0120 12:21:56.788646  983974 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0120 12:21:56.788677  983974 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0120 12:21:56.788686  983974 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0120 12:21:56.788671  983974 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0120 12:21:56.788918  983974 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0120 12:21:56.788986  983974 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0120 12:21:56.790450  983974 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0120 12:21:56.790463  983974 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0120 12:21:56.790472  983974 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:21:56.790586  983974 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0120 12:21:56.790591  983974 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0120 12:21:56.790674  983974 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0120 12:21:56.790721  983974 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0120 12:21:56.790855  983974 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0120 12:21:57.009252  983974 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0120 12:21:57.010909  983974 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0120 12:21:57.018415  983974 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0120 12:21:57.026186  983974 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0120 12:21:57.026221  983974 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0120 12:21:57.033293  983974 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0120 12:21:57.042182  983974 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0120 12:21:57.264881  983974 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0120 12:21:57.265011  983974 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0120 12:21:57.265103  983974 ssh_runner.go:195] Run: which crictl
	I0120 12:21:57.279351  983974 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "18688a72645c5d34e1cc70d8deb5bef4fc6c9073bb1b53c7812856afc1de1237" in container runtime
	I0120 12:21:57.279403  983974 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0120 12:21:57.279453  983974 ssh_runner.go:195] Run: which crictl
	I0120 12:21:57.279465  983974 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "beb86f5d8e6cd2234ca24649b74bd10e1e12446764560a3804d85dd6815d0a18" in container runtime
	I0120 12:21:57.279506  983974 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0120 12:21:57.279548  983974 ssh_runner.go:195] Run: which crictl
	I0120 12:21:57.279580  983974 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0120 12:21:57.279615  983974 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0120 12:21:57.279655  983974 ssh_runner.go:195] Run: which crictl
	I0120 12:21:57.279655  983974 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "e9f4b425f9192c11c0fa338cabe04f832aa5cea6dcbba2d1bd2a931224421693" in container runtime
	I0120 12:21:57.279732  983974 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0120 12:21:57.279762  983974 ssh_runner.go:195] Run: which crictl
	I0120 12:21:57.279782  983974 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "b4ea7e648530d171b38f67305e22caf49f9d968d71c558e663709b805076538d" in container runtime
	I0120 12:21:57.279813  983974 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0120 12:21:57.279842  983974 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0120 12:21:57.279788  983974 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0120 12:21:57.279958  983974 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0120 12:21:57.279987  983974 ssh_runner.go:195] Run: which crictl
	I0120 12:21:57.280012  983974 ssh_runner.go:195] Run: which crictl
	I0120 12:21:57.292121  983974 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0120 12:21:57.292272  983974 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0120 12:21:57.295427  983974 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.1
	I0120 12:21:57.343770  983974 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0120 12:21:57.343952  983974 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0120 12:21:57.343971  983974 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0120 12:21:57.344147  983974 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0120 12:21:57.462391  983974 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0120 12:21:57.462725  983974 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0120 12:21:57.468801  983974 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.1
	I0120 12:21:57.526867  983974 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0120 12:21:57.526957  983974 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0120 12:21:57.527064  983974 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0120 12:21:57.537896  983974 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0120 12:21:57.585501  983974 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0120 12:21:58.221635  984209 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 12:21:58.221717  984209 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 12:21:58.228428  984209 start.go:563] Will wait 60s for crictl version
	I0120 12:21:58.228493  984209 ssh_runner.go:195] Run: which crictl
	I0120 12:21:58.233479  984209 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 12:21:58.283380  984209 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 12:21:58.283476  984209 ssh_runner.go:195] Run: crio --version
	I0120 12:21:58.319657  984209 ssh_runner.go:195] Run: crio --version
	I0120 12:21:58.357799  984209 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0120 12:21:55.433782  984358 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	I0120 12:21:55.433988  984358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:21:55.434019  984358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:21:55.454707  984358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46845
	I0120 12:21:55.455315  984358 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:21:55.456004  984358 main.go:141] libmachine: Using API Version  1
	I0120 12:21:55.456067  984358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:21:55.456514  984358 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:21:55.456741  984358 main.go:141] libmachine: (NoKubernetes-378897) Calling .GetMachineName
	I0120 12:21:55.456934  984358 main.go:141] libmachine: (NoKubernetes-378897) Calling .DriverName
	I0120 12:21:55.457143  984358 start.go:159] libmachine.API.Create for "NoKubernetes-378897" (driver="kvm2")
	I0120 12:21:55.457165  984358 client.go:168] LocalClient.Create starting
	I0120 12:21:55.457201  984358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem
	I0120 12:21:55.457236  984358 main.go:141] libmachine: Decoding PEM data...
	I0120 12:21:55.457251  984358 main.go:141] libmachine: Parsing certificate...
	I0120 12:21:55.457317  984358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem
	I0120 12:21:55.457337  984358 main.go:141] libmachine: Decoding PEM data...
	I0120 12:21:55.457348  984358 main.go:141] libmachine: Parsing certificate...
	I0120 12:21:55.457365  984358 main.go:141] libmachine: Running pre-create checks...
	I0120 12:21:55.457373  984358 main.go:141] libmachine: (NoKubernetes-378897) Calling .PreCreateCheck
	I0120 12:21:55.457790  984358 main.go:141] libmachine: (NoKubernetes-378897) Calling .GetConfigRaw
	I0120 12:21:55.458296  984358 main.go:141] libmachine: Creating machine...
	I0120 12:21:55.458307  984358 main.go:141] libmachine: (NoKubernetes-378897) Calling .Create
	I0120 12:21:55.458495  984358 main.go:141] libmachine: (NoKubernetes-378897) creating KVM machine...
	I0120 12:21:55.458509  984358 main.go:141] libmachine: (NoKubernetes-378897) creating network...
	I0120 12:21:55.459873  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | found existing default KVM network
	I0120 12:21:55.461727  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | I0120 12:21:55.461520  984408 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:85:dd:5e} reservation:<nil>}
	I0120 12:21:55.463231  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | I0120 12:21:55.463139  984408 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:67:b2:5f} reservation:<nil>}
	I0120 12:21:55.465188  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | I0120 12:21:55.465095  984408 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000390bf0}
	I0120 12:21:55.465254  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | created network xml: 
	I0120 12:21:55.465268  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | <network>
	I0120 12:21:55.465284  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG |   <name>mk-NoKubernetes-378897</name>
	I0120 12:21:55.465293  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG |   <dns enable='no'/>
	I0120 12:21:55.465300  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG |   
	I0120 12:21:55.465307  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0120 12:21:55.465315  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG |     <dhcp>
	I0120 12:21:55.465323  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0120 12:21:55.465331  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG |     </dhcp>
	I0120 12:21:55.465336  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG |   </ip>
	I0120 12:21:55.465343  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG |   
	I0120 12:21:55.465348  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | </network>
	I0120 12:21:55.465357  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | 
	I0120 12:21:55.470786  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | trying to create private KVM network mk-NoKubernetes-378897 192.168.61.0/24...
	I0120 12:21:55.552688  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | private KVM network mk-NoKubernetes-378897 192.168.61.0/24 created
	I0120 12:21:55.552726  984358 main.go:141] libmachine: (NoKubernetes-378897) setting up store path in /home/jenkins/minikube-integration/20151-942401/.minikube/machines/NoKubernetes-378897 ...
	I0120 12:21:55.552747  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | I0120 12:21:55.552663  984408 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20151-942401/.minikube
	I0120 12:21:55.552758  984358 main.go:141] libmachine: (NoKubernetes-378897) building disk image from file:///home/jenkins/minikube-integration/20151-942401/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0120 12:21:55.552867  984358 main.go:141] libmachine: (NoKubernetes-378897) Downloading /home/jenkins/minikube-integration/20151-942401/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20151-942401/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0120 12:21:55.946067  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | I0120 12:21:55.945906  984408 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/NoKubernetes-378897/id_rsa...
	I0120 12:21:56.210132  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | I0120 12:21:56.209947  984408 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/NoKubernetes-378897/NoKubernetes-378897.rawdisk...
	I0120 12:21:56.210153  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | Writing magic tar header
	I0120 12:21:56.210174  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | Writing SSH key tar header
	I0120 12:21:56.210236  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | I0120 12:21:56.210176  984408 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20151-942401/.minikube/machines/NoKubernetes-378897 ...
	I0120 12:21:56.210369  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/NoKubernetes-378897
	I0120 12:21:56.210410  984358 main.go:141] libmachine: (NoKubernetes-378897) setting executable bit set on /home/jenkins/minikube-integration/20151-942401/.minikube/machines/NoKubernetes-378897 (perms=drwx------)
	I0120 12:21:56.210426  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-942401/.minikube/machines
	I0120 12:21:56.210472  984358 main.go:141] libmachine: (NoKubernetes-378897) setting executable bit set on /home/jenkins/minikube-integration/20151-942401/.minikube/machines (perms=drwxr-xr-x)
	I0120 12:21:56.210503  984358 main.go:141] libmachine: (NoKubernetes-378897) setting executable bit set on /home/jenkins/minikube-integration/20151-942401/.minikube (perms=drwxr-xr-x)
	I0120 12:21:56.210533  984358 main.go:141] libmachine: (NoKubernetes-378897) setting executable bit set on /home/jenkins/minikube-integration/20151-942401 (perms=drwxrwxr-x)
	I0120 12:21:56.210546  984358 main.go:141] libmachine: (NoKubernetes-378897) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0120 12:21:56.210553  984358 main.go:141] libmachine: (NoKubernetes-378897) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0120 12:21:56.210563  984358 main.go:141] libmachine: (NoKubernetes-378897) creating domain...
	I0120 12:21:56.210575  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-942401/.minikube
	I0120 12:21:56.210582  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-942401
	I0120 12:21:56.210591  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0120 12:21:56.210598  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | checking permissions on dir: /home/jenkins
	I0120 12:21:56.210606  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | checking permissions on dir: /home
	I0120 12:21:56.210611  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | skipping /home - not owner
	I0120 12:21:56.212297  984358 main.go:141] libmachine: (NoKubernetes-378897) define libvirt domain using xml: 
	I0120 12:21:56.212321  984358 main.go:141] libmachine: (NoKubernetes-378897) <domain type='kvm'>
	I0120 12:21:56.212330  984358 main.go:141] libmachine: (NoKubernetes-378897)   <name>NoKubernetes-378897</name>
	I0120 12:21:56.212337  984358 main.go:141] libmachine: (NoKubernetes-378897)   <memory unit='MiB'>6000</memory>
	I0120 12:21:56.212354  984358 main.go:141] libmachine: (NoKubernetes-378897)   <vcpu>2</vcpu>
	I0120 12:21:56.212360  984358 main.go:141] libmachine: (NoKubernetes-378897)   <features>
	I0120 12:21:56.212371  984358 main.go:141] libmachine: (NoKubernetes-378897)     <acpi/>
	I0120 12:21:56.212375  984358 main.go:141] libmachine: (NoKubernetes-378897)     <apic/>
	I0120 12:21:56.212383  984358 main.go:141] libmachine: (NoKubernetes-378897)     <pae/>
	I0120 12:21:56.212388  984358 main.go:141] libmachine: (NoKubernetes-378897)     
	I0120 12:21:56.212395  984358 main.go:141] libmachine: (NoKubernetes-378897)   </features>
	I0120 12:21:56.212401  984358 main.go:141] libmachine: (NoKubernetes-378897)   <cpu mode='host-passthrough'>
	I0120 12:21:56.212408  984358 main.go:141] libmachine: (NoKubernetes-378897)   
	I0120 12:21:56.212413  984358 main.go:141] libmachine: (NoKubernetes-378897)   </cpu>
	I0120 12:21:56.212419  984358 main.go:141] libmachine: (NoKubernetes-378897)   <os>
	I0120 12:21:56.212425  984358 main.go:141] libmachine: (NoKubernetes-378897)     <type>hvm</type>
	I0120 12:21:56.212431  984358 main.go:141] libmachine: (NoKubernetes-378897)     <boot dev='cdrom'/>
	I0120 12:21:56.212436  984358 main.go:141] libmachine: (NoKubernetes-378897)     <boot dev='hd'/>
	I0120 12:21:56.212442  984358 main.go:141] libmachine: (NoKubernetes-378897)     <bootmenu enable='no'/>
	I0120 12:21:56.212446  984358 main.go:141] libmachine: (NoKubernetes-378897)   </os>
	I0120 12:21:56.212453  984358 main.go:141] libmachine: (NoKubernetes-378897)   <devices>
	I0120 12:21:56.212458  984358 main.go:141] libmachine: (NoKubernetes-378897)     <disk type='file' device='cdrom'>
	I0120 12:21:56.212470  984358 main.go:141] libmachine: (NoKubernetes-378897)       <source file='/home/jenkins/minikube-integration/20151-942401/.minikube/machines/NoKubernetes-378897/boot2docker.iso'/>
	I0120 12:21:56.212477  984358 main.go:141] libmachine: (NoKubernetes-378897)       <target dev='hdc' bus='scsi'/>
	I0120 12:21:56.212484  984358 main.go:141] libmachine: (NoKubernetes-378897)       <readonly/>
	I0120 12:21:56.212499  984358 main.go:141] libmachine: (NoKubernetes-378897)     </disk>
	I0120 12:21:56.212507  984358 main.go:141] libmachine: (NoKubernetes-378897)     <disk type='file' device='disk'>
	I0120 12:21:56.212516  984358 main.go:141] libmachine: (NoKubernetes-378897)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0120 12:21:56.212528  984358 main.go:141] libmachine: (NoKubernetes-378897)       <source file='/home/jenkins/minikube-integration/20151-942401/.minikube/machines/NoKubernetes-378897/NoKubernetes-378897.rawdisk'/>
	I0120 12:21:56.212536  984358 main.go:141] libmachine: (NoKubernetes-378897)       <target dev='hda' bus='virtio'/>
	I0120 12:21:56.212542  984358 main.go:141] libmachine: (NoKubernetes-378897)     </disk>
	I0120 12:21:56.212548  984358 main.go:141] libmachine: (NoKubernetes-378897)     <interface type='network'>
	I0120 12:21:56.212556  984358 main.go:141] libmachine: (NoKubernetes-378897)       <source network='mk-NoKubernetes-378897'/>
	I0120 12:21:56.212564  984358 main.go:141] libmachine: (NoKubernetes-378897)       <model type='virtio'/>
	I0120 12:21:56.212570  984358 main.go:141] libmachine: (NoKubernetes-378897)     </interface>
	I0120 12:21:56.212576  984358 main.go:141] libmachine: (NoKubernetes-378897)     <interface type='network'>
	I0120 12:21:56.212583  984358 main.go:141] libmachine: (NoKubernetes-378897)       <source network='default'/>
	I0120 12:21:56.212588  984358 main.go:141] libmachine: (NoKubernetes-378897)       <model type='virtio'/>
	I0120 12:21:56.212595  984358 main.go:141] libmachine: (NoKubernetes-378897)     </interface>
	I0120 12:21:56.212601  984358 main.go:141] libmachine: (NoKubernetes-378897)     <serial type='pty'>
	I0120 12:21:56.212608  984358 main.go:141] libmachine: (NoKubernetes-378897)       <target port='0'/>
	I0120 12:21:56.212612  984358 main.go:141] libmachine: (NoKubernetes-378897)     </serial>
	I0120 12:21:56.212619  984358 main.go:141] libmachine: (NoKubernetes-378897)     <console type='pty'>
	I0120 12:21:56.212625  984358 main.go:141] libmachine: (NoKubernetes-378897)       <target type='serial' port='0'/>
	I0120 12:21:56.212631  984358 main.go:141] libmachine: (NoKubernetes-378897)     </console>
	I0120 12:21:56.212636  984358 main.go:141] libmachine: (NoKubernetes-378897)     <rng model='virtio'>
	I0120 12:21:56.212644  984358 main.go:141] libmachine: (NoKubernetes-378897)       <backend model='random'>/dev/random</backend>
	I0120 12:21:56.212651  984358 main.go:141] libmachine: (NoKubernetes-378897)     </rng>
	I0120 12:21:56.212657  984358 main.go:141] libmachine: (NoKubernetes-378897)     
	I0120 12:21:56.212663  984358 main.go:141] libmachine: (NoKubernetes-378897)     
	I0120 12:21:56.212669  984358 main.go:141] libmachine: (NoKubernetes-378897)   </devices>
	I0120 12:21:56.212673  984358 main.go:141] libmachine: (NoKubernetes-378897) </domain>
	I0120 12:21:56.212685  984358 main.go:141] libmachine: (NoKubernetes-378897) 
	I0120 12:21:56.304952  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | domain NoKubernetes-378897 has defined MAC address 52:54:00:0c:76:6e in network default
	I0120 12:21:56.305781  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | domain NoKubernetes-378897 has defined MAC address 52:54:00:5b:e8:94 in network mk-NoKubernetes-378897
	I0120 12:21:56.305881  984358 main.go:141] libmachine: (NoKubernetes-378897) starting domain...
	I0120 12:21:56.305900  984358 main.go:141] libmachine: (NoKubernetes-378897) ensuring networks are active...
	I0120 12:21:56.306917  984358 main.go:141] libmachine: (NoKubernetes-378897) Ensuring network default is active
	I0120 12:21:56.307399  984358 main.go:141] libmachine: (NoKubernetes-378897) Ensuring network mk-NoKubernetes-378897 is active
	I0120 12:21:56.308166  984358 main.go:141] libmachine: (NoKubernetes-378897) getting domain XML...
	I0120 12:21:56.309071  984358 main.go:141] libmachine: (NoKubernetes-378897) creating domain...
	I0120 12:21:58.109623  984358 main.go:141] libmachine: (NoKubernetes-378897) waiting for IP...
	I0120 12:21:58.111323  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | domain NoKubernetes-378897 has defined MAC address 52:54:00:5b:e8:94 in network mk-NoKubernetes-378897
	I0120 12:21:58.111860  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | unable to find current IP address of domain NoKubernetes-378897 in network mk-NoKubernetes-378897
	I0120 12:21:58.111935  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | I0120 12:21:58.111858  984408 retry.go:31] will retry after 300.198876ms: waiting for domain to come up
	I0120 12:21:58.413657  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | domain NoKubernetes-378897 has defined MAC address 52:54:00:5b:e8:94 in network mk-NoKubernetes-378897
	I0120 12:21:58.414454  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | unable to find current IP address of domain NoKubernetes-378897 in network mk-NoKubernetes-378897
	I0120 12:21:58.414481  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | I0120 12:21:58.414415  984408 retry.go:31] will retry after 307.778398ms: waiting for domain to come up
	I0120 12:21:58.723825  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | domain NoKubernetes-378897 has defined MAC address 52:54:00:5b:e8:94 in network mk-NoKubernetes-378897
	I0120 12:21:58.724432  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | unable to find current IP address of domain NoKubernetes-378897 in network mk-NoKubernetes-378897
	I0120 12:21:58.724455  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | I0120 12:21:58.724414  984408 retry.go:31] will retry after 324.367903ms: waiting for domain to come up
	I0120 12:21:59.050179  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | domain NoKubernetes-378897 has defined MAC address 52:54:00:5b:e8:94 in network mk-NoKubernetes-378897
	I0120 12:21:59.050827  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | unable to find current IP address of domain NoKubernetes-378897 in network mk-NoKubernetes-378897
	I0120 12:21:59.050852  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | I0120 12:21:59.050787  984408 retry.go:31] will retry after 405.111407ms: waiting for domain to come up
	I0120 12:21:59.457294  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | domain NoKubernetes-378897 has defined MAC address 52:54:00:5b:e8:94 in network mk-NoKubernetes-378897
	I0120 12:21:59.457880  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | unable to find current IP address of domain NoKubernetes-378897 in network mk-NoKubernetes-378897
	I0120 12:21:59.457953  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | I0120 12:21:59.457863  984408 retry.go:31] will retry after 499.217202ms: waiting for domain to come up
	I0120 12:21:58.359130  984209 main.go:141] libmachine: (pause-298045) Calling .GetIP
	I0120 12:21:58.362846  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:58.363249  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:58.363277  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:58.363484  984209 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0120 12:21:58.368125  984209 kubeadm.go:883] updating cluster {Name:pause-298045 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:pause-298045 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.60 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 12:21:58.368276  984209 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 12:21:58.368320  984209 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:21:58.436082  984209 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 12:21:58.436112  984209 crio.go:433] Images already preloaded, skipping extraction
	I0120 12:21:58.436178  984209 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:21:58.483739  984209 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 12:21:58.483767  984209 cache_images.go:84] Images are preloaded, skipping loading
	I0120 12:21:58.483780  984209 kubeadm.go:934] updating node { 192.168.50.60 8443 v1.32.0 crio true true} ...
	I0120 12:21:58.483916  984209 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-298045 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:pause-298045 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 12:21:58.484007  984209 ssh_runner.go:195] Run: crio config
	I0120 12:21:58.538699  984209 cni.go:84] Creating CNI manager for ""
	I0120 12:21:58.538723  984209 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:21:58.538736  984209 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 12:21:58.538768  984209 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.60 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-298045 NodeName:pause-298045 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.60"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.60 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 12:21:58.538956  984209 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.60
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-298045"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.60"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.60"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 12:21:58.539039  984209 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 12:21:58.551841  984209 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 12:21:58.551919  984209 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 12:21:58.563410  984209 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0120 12:21:58.582875  984209 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 12:21:58.602853  984209 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I0120 12:21:58.627254  984209 ssh_runner.go:195] Run: grep 192.168.50.60	control-plane.minikube.internal$ /etc/hosts
	I0120 12:21:58.632549  984209 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:21:58.798793  984209 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:21:58.825671  984209 certs.go:68] Setting up /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/pause-298045 for IP: 192.168.50.60
	I0120 12:21:58.825704  984209 certs.go:194] generating shared ca certs ...
	I0120 12:21:58.825728  984209 certs.go:226] acquiring lock for ca certs: {Name:mk7d6f61e0c358ebc451db1b73619131ab80bd63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:21:58.825932  984209 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20151-942401/.minikube/ca.key
	I0120 12:21:58.826004  984209 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.key
	I0120 12:21:58.826021  984209 certs.go:256] generating profile certs ...
	I0120 12:21:58.826158  984209 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/pause-298045/client.key
	I0120 12:21:58.826251  984209 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/pause-298045/apiserver.key.7d49e320
	I0120 12:21:58.826318  984209 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/pause-298045/proxy-client.key
	I0120 12:21:58.826474  984209 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656.pem (1338 bytes)
	W0120 12:21:58.826547  984209 certs.go:480] ignoring /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656_empty.pem, impossibly tiny 0 bytes
	I0120 12:21:58.826566  984209 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem (1679 bytes)
	I0120 12:21:58.826602  984209 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem (1078 bytes)
	I0120 12:21:58.826637  984209 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem (1123 bytes)
	I0120 12:21:58.826675  984209 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem (1675 bytes)
	I0120 12:21:58.826736  984209 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem (1708 bytes)
	I0120 12:21:58.827584  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 12:21:58.909041  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0120 12:21:58.948606  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 12:21:59.012282  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 12:21:59.109695  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/pause-298045/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0120 12:21:59.238343  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/pause-298045/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0120 12:21:59.299284  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/pause-298045/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 12:21:59.334101  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/pause-298045/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0120 12:21:59.374934  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 12:21:59.414484  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656.pem --> /usr/share/ca-certificates/949656.pem (1338 bytes)
	I0120 12:21:59.444169  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem --> /usr/share/ca-certificates/9496562.pem (1708 bytes)
	I0120 12:21:59.467884  984209 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 12:21:59.490444  984209 ssh_runner.go:195] Run: openssl version
	I0120 12:21:59.507626  984209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 12:21:59.520255  984209 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:21:59.524911  984209 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 11:22 /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:21:59.524971  984209 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:21:59.531352  984209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 12:21:59.543105  984209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/949656.pem && ln -fs /usr/share/ca-certificates/949656.pem /etc/ssl/certs/949656.pem"
	I0120 12:21:59.555663  984209 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/949656.pem
	I0120 12:21:59.561820  984209 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 11:30 /usr/share/ca-certificates/949656.pem
	I0120 12:21:59.561864  984209 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/949656.pem
	I0120 12:21:59.569243  984209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/949656.pem /etc/ssl/certs/51391683.0"
	I0120 12:21:59.581486  984209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9496562.pem && ln -fs /usr/share/ca-certificates/9496562.pem /etc/ssl/certs/9496562.pem"
	I0120 12:21:59.592694  984209 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9496562.pem
	I0120 12:21:59.597427  984209 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 11:30 /usr/share/ca-certificates/9496562.pem
	I0120 12:21:59.597478  984209 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9496562.pem
	I0120 12:21:59.604688  984209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9496562.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 12:21:59.616268  984209 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 12:21:59.626223  984209 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 12:21:59.632273  984209 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 12:21:59.637690  984209 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 12:21:59.643343  984209 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 12:21:59.649192  984209 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 12:21:59.654933  984209 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
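Note: the "openssl x509 ... -checkend 86400" runs above assert that each existing control-plane certificate stays valid for at least another 24 hours before it is reused. A minimal Go sketch of the same check (a hypothetical helper for illustration, not minikube code; the path and the 24h window are taken from the log lines above):

	// certcheck.go: sketch of the "openssl x509 -checkend 86400" style check logged above.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// Same semantics as -checkend: "expiring" means NotAfter falls inside the window.
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if expiring {
			fmt.Println("certificate expires within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate valid for at least another 24h")
	}

As with openssl's -checkend, a certificate that expires inside the window is treated as stale; in a real run that would presumably prompt regeneration rather than the "skipping valid ... cert" path seen earlier in this log.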
	I0120 12:21:59.660489  984209 kubeadm.go:392] StartCluster: {Name:pause-298045 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:pause-298045 Namespace:default APIServerHAVI
P: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.60 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:
false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:21:59.660630  984209 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 12:21:59.660679  984209 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 12:21:59.702703  984209 cri.go:89] found id: "7522111a81eb592884102e40c70c96a442aacd7ead34e623d6d75cb0047e54a1"
	I0120 12:21:59.702723  984209 cri.go:89] found id: "87ea2b78f22aa3f634c75f73e4ff59c82419e70bcabcbf38ac3cd2cff94e916e"
	I0120 12:21:59.702727  984209 cri.go:89] found id: "97c209a11074c58552c075bb6b27e8d296987fd3a0b46a09585dcfc690275572"
	I0120 12:21:59.702730  984209 cri.go:89] found id: "f400c9e5b8388764e549d978dee73e17cb00cf3f100ab6ebfd3b553e155860ba"
	I0120 12:21:59.702733  984209 cri.go:89] found id: "97779b8bb3c647064d9431c5881d2fd1d07f9924cefcb8010cf1b47b282e8191"
	I0120 12:21:59.702736  984209 cri.go:89] found id: "38e061ad131bdad6beb214a78c3d8be29d96f19162a9861a6704003a916763f8"
	I0120 12:21:59.702739  984209 cri.go:89] found id: ""
	I0120 12:21:59.702780  984209 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-298045 -n pause-298045
helpers_test.go:261: (dbg) Run:  kubectl --context pause-298045 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-298045 -n pause-298045
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-298045 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-298045 logs -n 25: (1.435865162s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p scheduled-stop-293264       | scheduled-stop-293264     | jenkins | v1.35.0 | 20 Jan 25 12:17 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-293264       | scheduled-stop-293264     | jenkins | v1.35.0 | 20 Jan 25 12:17 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-293264       | scheduled-stop-293264     | jenkins | v1.35.0 | 20 Jan 25 12:17 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-293264       | scheduled-stop-293264     | jenkins | v1.35.0 | 20 Jan 25 12:17 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-293264       | scheduled-stop-293264     | jenkins | v1.35.0 | 20 Jan 25 12:17 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-293264       | scheduled-stop-293264     | jenkins | v1.35.0 | 20 Jan 25 12:17 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-293264       | scheduled-stop-293264     | jenkins | v1.35.0 | 20 Jan 25 12:17 UTC | 20 Jan 25 12:17 UTC |
	|         | --cancel-scheduled             |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-293264       | scheduled-stop-293264     | jenkins | v1.35.0 | 20 Jan 25 12:18 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-293264       | scheduled-stop-293264     | jenkins | v1.35.0 | 20 Jan 25 12:18 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-293264       | scheduled-stop-293264     | jenkins | v1.35.0 | 20 Jan 25 12:18 UTC | 20 Jan 25 12:18 UTC |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| delete  | -p scheduled-stop-293264       | scheduled-stop-293264     | jenkins | v1.35.0 | 20 Jan 25 12:19 UTC | 20 Jan 25 12:19 UTC |
	| start   | -p offline-crio-348074         | offline-crio-348074       | jenkins | v1.35.0 | 20 Jan 25 12:19 UTC | 20 Jan 25 12:20 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p force-systemd-env-414382    | force-systemd-env-414382  | jenkins | v1.35.0 | 20 Jan 25 12:19 UTC | 20 Jan 25 12:20 UTC |
	|         | --memory=2048                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-378897         | NoKubernetes-378897       | jenkins | v1.35.0 | 20 Jan 25 12:19 UTC |                     |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-378897         | NoKubernetes-378897       | jenkins | v1.35.0 | 20 Jan 25 12:19 UTC | 20 Jan 25 12:20 UTC |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-438919      | minikube                  | jenkins | v1.26.0 | 20 Jan 25 12:19 UTC | 20 Jan 25 12:21 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| delete  | -p offline-crio-348074         | offline-crio-348074       | jenkins | v1.35.0 | 20 Jan 25 12:20 UTC | 20 Jan 25 12:20 UTC |
	| start   | -p pause-298045 --memory=2048  | pause-298045              | jenkins | v1.35.0 | 20 Jan 25 12:20 UTC | 20 Jan 25 12:21 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-414382    | force-systemd-env-414382  | jenkins | v1.35.0 | 20 Jan 25 12:20 UTC | 20 Jan 25 12:20 UTC |
	| start   | -p kubernetes-upgrade-049625   | kubernetes-upgrade-049625 | jenkins | v1.35.0 | 20 Jan 25 12:20 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-378897         | NoKubernetes-378897       | jenkins | v1.35.0 | 20 Jan 25 12:20 UTC | 20 Jan 25 12:21 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-438919      | running-upgrade-438919    | jenkins | v1.35.0 | 20 Jan 25 12:21 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p pause-298045                | pause-298045              | jenkins | v1.35.0 | 20 Jan 25 12:21 UTC | 20 Jan 25 12:22 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-378897         | NoKubernetes-378897       | jenkins | v1.35.0 | 20 Jan 25 12:21 UTC | 20 Jan 25 12:21 UTC |
	| start   | -p NoKubernetes-378897         | NoKubernetes-378897       | jenkins | v1.35.0 | 20 Jan 25 12:21 UTC |                     |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 12:21:49
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 12:21:49.469376  984358 out.go:345] Setting OutFile to fd 1 ...
	I0120 12:21:49.469459  984358 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:21:49.469462  984358 out.go:358] Setting ErrFile to fd 2...
	I0120 12:21:49.469465  984358 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:21:49.470097  984358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-942401/.minikube/bin
	I0120 12:21:49.471314  984358 out.go:352] Setting JSON to false
	I0120 12:21:49.472624  984358 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":18252,"bootTime":1737357457,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 12:21:49.472753  984358 start.go:139] virtualization: kvm guest
	I0120 12:21:49.474796  984358 out.go:177] * [NoKubernetes-378897] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 12:21:49.476697  984358 notify.go:220] Checking for updates...
	I0120 12:21:49.476784  984358 out.go:177]   - MINIKUBE_LOCATION=20151
	I0120 12:21:49.478390  984358 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 12:21:49.480195  984358 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:21:49.482104  984358 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-942401/.minikube
	I0120 12:21:49.483600  984358 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 12:21:49.485086  984358 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 12:21:49.487248  984358 config.go:182] Loaded profile config "kubernetes-upgrade-049625": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0120 12:21:49.487388  984358 config.go:182] Loaded profile config "pause-298045": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:21:49.487505  984358 config.go:182] Loaded profile config "running-upgrade-438919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0120 12:21:49.487528  984358 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0120 12:21:49.487667  984358 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 12:21:49.524155  984358 out.go:177] * Using the kvm2 driver based on user configuration
	I0120 12:21:49.525444  984358 start.go:297] selected driver: kvm2
	I0120 12:21:49.525453  984358 start.go:901] validating driver "kvm2" against <nil>
	I0120 12:21:49.525462  984358 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 12:21:49.525729  984358 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0120 12:21:49.525781  984358 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:21:49.525852  984358 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20151-942401/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 12:21:49.541486  984358 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 12:21:49.541538  984358 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 12:21:49.542024  984358 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0120 12:21:49.542153  984358 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0120 12:21:49.542173  984358 cni.go:84] Creating CNI manager for ""
	I0120 12:21:49.542233  984358 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:21:49.542243  984358 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0120 12:21:49.542275  984358 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0120 12:21:49.542317  984358 start.go:340] cluster config:
	{Name:NoKubernetes-378897 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-378897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:21:49.542425  984358 iso.go:125] acquiring lock: {Name:mk7f0ac9e7ba04626414e742c3e5292d79996a4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:21:49.544219  984358 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-378897
	I0120 12:21:47.620592  983974 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 12:21:47.641614  983974 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 12:21:47.824434  983974 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 12:21:48.054866  983974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 12:21:48.069565  983974 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 12:21:48.097950  983974 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0120 12:21:48.098007  983974 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:48.107830  983974 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 12:21:48.107893  983974 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:48.118595  983974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:48.127838  983974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:48.137932  983974 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 12:21:48.146820  983974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:48.156363  983974 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:48.174311  983974 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
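Note: taken together, the sed edits above configure cri-o for the registry.k8s.io/pause:3.7 pause image, the cgroupfs cgroup driver, conmon in the pod cgroup, and unprivileged low ports. An approximate reconstruction of the resulting drop-in is shown below; the section headers follow upstream CRI-O conventions and the exact layout of the file shipped on the minikube ISO may differ:

	# /etc/crio/crio.conf.d/02-crio.conf (illustrative result of the edits above)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.7"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]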
	I0120 12:21:48.183957  983974 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 12:21:48.194783  983974 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 12:21:48.208814  983974 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:21:48.405846  983974 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 12:21:50.830211  983974 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.424323062s)
	I0120 12:21:50.830244  983974 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 12:21:50.830304  983974 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 12:21:50.834515  983974 start.go:563] Will wait 60s for crictl version
	I0120 12:21:50.834598  983974 ssh_runner.go:195] Run: which crictl
	I0120 12:21:50.840022  983974 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 12:21:50.863834  983974 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.22.3
	RuntimeApiVersion:  v1alpha2
	I0120 12:21:50.863918  983974 ssh_runner.go:195] Run: crio --version
	I0120 12:21:50.896361  983974 ssh_runner.go:195] Run: crio --version
	I0120 12:21:50.927328  983974 out.go:177] * Preparing Kubernetes v1.24.1 on CRI-O 1.22.3 ...
	I0120 12:21:49.145828  983149 out.go:235]   - Booting up control plane ...
	I0120 12:21:49.145964  983149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 12:21:49.153153  983149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 12:21:49.153237  983149 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 12:21:49.153897  983149 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 12:21:49.160318  983149 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0120 12:21:47.681401  984209 machine.go:93] provisionDockerMachine start ...
	I0120 12:21:47.681424  984209 main.go:141] libmachine: (pause-298045) Calling .DriverName
	I0120 12:21:47.681617  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHHostname
	I0120 12:21:47.683910  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:47.684586  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:47.684653  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:47.684890  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHPort
	I0120 12:21:47.686660  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:47.686863  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:47.686972  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHUsername
	I0120 12:21:47.687090  984209 main.go:141] libmachine: Using SSH client type: native
	I0120 12:21:47.687322  984209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0120 12:21:47.687334  984209 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 12:21:47.806065  984209 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-298045
	
	I0120 12:21:47.806100  984209 main.go:141] libmachine: (pause-298045) Calling .GetMachineName
	I0120 12:21:47.807072  984209 buildroot.go:166] provisioning hostname "pause-298045"
	I0120 12:21:47.807124  984209 main.go:141] libmachine: (pause-298045) Calling .GetMachineName
	I0120 12:21:47.807393  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHHostname
	I0120 12:21:47.811076  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:47.811629  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:47.811662  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:47.811962  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHPort
	I0120 12:21:47.812161  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:47.812323  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:47.812487  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHUsername
	I0120 12:21:47.812680  984209 main.go:141] libmachine: Using SSH client type: native
	I0120 12:21:47.812976  984209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0120 12:21:47.812999  984209 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-298045 && echo "pause-298045" | sudo tee /etc/hostname
	I0120 12:21:47.944902  984209 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-298045
	
	I0120 12:21:47.944934  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHHostname
	I0120 12:21:47.948698  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:47.949220  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:47.949288  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:47.949825  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHPort
	I0120 12:21:47.950133  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:47.950338  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:47.950484  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHUsername
	I0120 12:21:47.950715  984209 main.go:141] libmachine: Using SSH client type: native
	I0120 12:21:47.950962  984209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0120 12:21:47.951029  984209 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-298045' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-298045/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-298045' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 12:21:48.075875  984209 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 12:21:48.075917  984209 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20151-942401/.minikube CaCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20151-942401/.minikube}
	I0120 12:21:48.075969  984209 buildroot.go:174] setting up certificates
	I0120 12:21:48.075979  984209 provision.go:84] configureAuth start
	I0120 12:21:48.076001  984209 main.go:141] libmachine: (pause-298045) Calling .GetMachineName
	I0120 12:21:48.076319  984209 main.go:141] libmachine: (pause-298045) Calling .GetIP
	I0120 12:21:48.079748  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:48.080268  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:48.080316  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:48.080512  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHHostname
	I0120 12:21:48.083503  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:48.083939  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:48.083967  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:48.084143  984209 provision.go:143] copyHostCerts
	I0120 12:21:48.084222  984209 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem, removing ...
	I0120 12:21:48.084266  984209 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem
	I0120 12:21:48.084336  984209 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem (1123 bytes)
	I0120 12:21:48.084492  984209 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem, removing ...
	I0120 12:21:48.084522  984209 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem
	I0120 12:21:48.084556  984209 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem (1675 bytes)
	I0120 12:21:48.084635  984209 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem, removing ...
	I0120 12:21:48.084654  984209 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem
	I0120 12:21:48.084679  984209 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem (1078 bytes)
	I0120 12:21:48.084820  984209 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem org=jenkins.pause-298045 san=[127.0.0.1 192.168.50.60 localhost minikube pause-298045]
	I0120 12:21:48.324701  984209 provision.go:177] copyRemoteCerts
	I0120 12:21:48.324775  984209 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 12:21:48.324821  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHHostname
	I0120 12:21:48.327899  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:48.328190  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:48.328228  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:48.328525  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHPort
	I0120 12:21:48.328798  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:48.328980  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHUsername
	I0120 12:21:48.329139  984209 sshutil.go:53] new ssh client: &{IP:192.168.50.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/pause-298045/id_rsa Username:docker}
	I0120 12:21:48.423464  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0120 12:21:48.454560  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0120 12:21:48.481363  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0120 12:21:48.506936  984209 provision.go:87] duration metric: took 430.937393ms to configureAuth
	I0120 12:21:48.506960  984209 buildroot.go:189] setting minikube options for container-runtime
	I0120 12:21:48.507111  984209 config.go:182] Loaded profile config "pause-298045": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:21:48.507174  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHHostname
	I0120 12:21:48.510005  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:48.510510  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:48.510562  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:48.510832  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHPort
	I0120 12:21:48.511011  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:48.511167  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:48.511351  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHUsername
	I0120 12:21:48.511515  984209 main.go:141] libmachine: Using SSH client type: native
	I0120 12:21:48.511718  984209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0120 12:21:48.511743  984209 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 12:21:50.928860  983974 main.go:141] libmachine: (running-upgrade-438919) Calling .GetIP
	I0120 12:21:50.931985  983974 main.go:141] libmachine: (running-upgrade-438919) DBG | domain running-upgrade-438919 has defined MAC address 52:54:00:9a:d1:63 in network mk-running-upgrade-438919
	I0120 12:21:50.932361  983974 main.go:141] libmachine: (running-upgrade-438919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:d1:63", ip: ""} in network mk-running-upgrade-438919: {Iface:virbr1 ExpiryTime:2025-01-20 13:20:38 +0000 UTC Type:0 Mac:52:54:00:9a:d1:63 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:running-upgrade-438919 Clientid:01:52:54:00:9a:d1:63}
	I0120 12:21:50.932395  983974 main.go:141] libmachine: (running-upgrade-438919) DBG | domain running-upgrade-438919 has defined IP address 192.168.39.240 and MAC address 52:54:00:9a:d1:63 in network mk-running-upgrade-438919
	I0120 12:21:50.932620  983974 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0120 12:21:50.936064  983974 kubeadm.go:883] updating cluster {Name:running-upgrade-438919 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-438919 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0120 12:21:50.936164  983974 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0120 12:21:50.936204  983974 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:21:50.968456  983974 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.1". assuming images are not preloaded.
	I0120 12:21:50.968515  983974 ssh_runner.go:195] Run: which lz4
	I0120 12:21:50.971914  983974 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 12:21:50.975767  983974 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 12:21:50.975793  983974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (496813465 bytes)
	I0120 12:21:49.545467  984358 preload.go:131] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W0120 12:21:50.471634  984358 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0120 12:21:50.471784  984358 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/NoKubernetes-378897/config.json ...
	I0120 12:21:50.471814  984358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/NoKubernetes-378897/config.json: {Name:mk0f9f7eb221ce2809a1e9b33745b57b779c73cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:21:50.471963  984358 start.go:360] acquireMachinesLock for NoKubernetes-378897: {Name:mkd5527ce9753efd08511b23d71dbb6bbf416f1b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 12:21:55.431385  984358 start.go:364] duration metric: took 4.959376725s to acquireMachinesLock for "NoKubernetes-378897"
	I0120 12:21:55.431436  984358 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-378897 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-378
897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Sock
etVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 12:21:55.431556  984358 start.go:125] createHost starting for "" (driver="kvm2")
	I0120 12:21:55.184816  984209 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 12:21:55.184850  984209 machine.go:96] duration metric: took 7.503431773s to provisionDockerMachine
	I0120 12:21:55.184866  984209 start.go:293] postStartSetup for "pause-298045" (driver="kvm2")
	I0120 12:21:55.184879  984209 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 12:21:55.184906  984209 main.go:141] libmachine: (pause-298045) Calling .DriverName
	I0120 12:21:55.185298  984209 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 12:21:55.185331  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHHostname
	I0120 12:21:55.188337  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:55.188649  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:55.188680  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:55.189413  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHPort
	I0120 12:21:55.190871  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:55.191196  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHUsername
	I0120 12:21:55.191415  984209 sshutil.go:53] new ssh client: &{IP:192.168.50.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/pause-298045/id_rsa Username:docker}
	I0120 12:21:55.278122  984209 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 12:21:55.282651  984209 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 12:21:55.282682  984209 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-942401/.minikube/addons for local assets ...
	I0120 12:21:55.282769  984209 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-942401/.minikube/files for local assets ...
	I0120 12:21:55.282853  984209 filesync.go:149] local asset: /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem -> 9496562.pem in /etc/ssl/certs
	I0120 12:21:55.282942  984209 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 12:21:55.292898  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem --> /etc/ssl/certs/9496562.pem (1708 bytes)
	I0120 12:21:55.320838  984209 start.go:296] duration metric: took 135.954837ms for postStartSetup
	I0120 12:21:55.320887  984209 fix.go:56] duration metric: took 7.666979647s for fixHost
	I0120 12:21:55.320914  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHHostname
	I0120 12:21:55.324372  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:55.324852  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:55.324880  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:55.325110  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHPort
	I0120 12:21:55.325319  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:55.325526  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:55.325711  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHUsername
	I0120 12:21:55.325905  984209 main.go:141] libmachine: Using SSH client type: native
	I0120 12:21:55.326130  984209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0120 12:21:55.326148  984209 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 12:21:55.431197  984209 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737375715.391020400
	
	I0120 12:21:55.431224  984209 fix.go:216] guest clock: 1737375715.391020400
	I0120 12:21:55.431235  984209 fix.go:229] Guest: 2025-01-20 12:21:55.3910204 +0000 UTC Remote: 2025-01-20 12:21:55.320893381 +0000 UTC m=+7.867264972 (delta=70.127019ms)
	I0120 12:21:55.431263  984209 fix.go:200] guest clock delta is within tolerance: 70.127019ms
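
The fix.go lines above read the guest clock over SSH with date +%s.%N, compare it against the host clock, and accept the drift when it stays within tolerance. A minimal Go sketch of that comparison; the helper name and the 2s tolerance are illustrative, not minikube's own values:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // clockDeltaOK parses the guest's `date +%s.%N` output and reports whether
    // the guest/host drift stays within the given tolerance.
    func clockDeltaOK(guestOut string, tolerance time.Duration) (time.Duration, bool, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            return 0, false, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance, nil
    }

    func main() {
        // Value shaped like the guest clock reading captured in the log above.
        delta, ok, err := clockDeltaOK("1737375715.391020400", 2*time.Second)
        fmt.Println(delta, ok, err)
    }
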
	I0120 12:21:55.431270  984209 start.go:83] releasing machines lock for "pause-298045", held for 7.777375463s
	I0120 12:21:55.431307  984209 main.go:141] libmachine: (pause-298045) Calling .DriverName
	I0120 12:21:55.431605  984209 main.go:141] libmachine: (pause-298045) Calling .GetIP
	I0120 12:21:55.434885  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:55.435333  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:55.435390  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:55.435554  984209 main.go:141] libmachine: (pause-298045) Calling .DriverName
	I0120 12:21:55.436171  984209 main.go:141] libmachine: (pause-298045) Calling .DriverName
	I0120 12:21:55.436380  984209 main.go:141] libmachine: (pause-298045) Calling .DriverName
	I0120 12:21:55.436473  984209 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 12:21:55.436542  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHHostname
	I0120 12:21:55.436803  984209 ssh_runner.go:195] Run: cat /version.json
	I0120 12:21:55.436834  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHHostname
	I0120 12:21:55.439617  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:55.439952  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:55.439981  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:55.440152  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHPort
	I0120 12:21:55.440212  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:55.440338  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:55.440519  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHUsername
	I0120 12:21:55.440716  984209 sshutil.go:53] new ssh client: &{IP:192.168.50.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/pause-298045/id_rsa Username:docker}
	I0120 12:21:55.440731  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:55.440759  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:55.440943  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHPort
	I0120 12:21:55.441108  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHKeyPath
	I0120 12:21:55.441267  984209 main.go:141] libmachine: (pause-298045) Calling .GetSSHUsername
	I0120 12:21:55.441453  984209 sshutil.go:53] new ssh client: &{IP:192.168.50.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/pause-298045/id_rsa Username:docker}
	I0120 12:21:55.547694  984209 ssh_runner.go:195] Run: systemctl --version
	I0120 12:21:55.555288  984209 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 12:21:55.714394  984209 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 12:21:55.737530  984209 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 12:21:55.737628  984209 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 12:21:55.754466  984209 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0120 12:21:55.754497  984209 start.go:495] detecting cgroup driver to use...
	I0120 12:21:55.754588  984209 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 12:21:55.785063  984209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 12:21:55.810833  984209 docker.go:217] disabling cri-docker service (if available) ...
	I0120 12:21:55.810909  984209 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 12:21:55.850702  984209 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 12:21:55.874109  984209 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 12:21:56.068672  984209 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 12:21:56.234848  984209 docker.go:233] disabling docker service ...
	I0120 12:21:56.234920  984209 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 12:21:56.259943  984209 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 12:21:56.273211  984209 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 12:21:56.504079  984209 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 12:21:56.771090  984209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 12:21:56.807702  984209 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 12:21:56.849556  984209 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0120 12:21:56.849619  984209 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:56.915758  984209 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 12:21:56.915843  984209 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:56.961546  984209 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:57.000209  984209 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:57.056192  984209 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 12:21:57.109297  984209 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:57.143033  984209 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:57.156545  984209 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:21:57.207048  984209 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 12:21:57.251961  984209 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 12:21:57.279286  984209 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:21:57.494183  984209 ssh_runner.go:195] Run: sudo systemctl restart crio
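
The sequence above configures CRI-O in place: crictl is pointed at unix:///var/run/crio/crio.sock via /etc/crictl.yaml, and the drop-in /etc/crio/crio.conf.d/02-crio.conf is rewritten with sed so that pause_image, cgroup_manager, conmon_cgroup and default_sysctls match what minikube expects before crio is restarted. A minimal Go sketch of one of those in-place edits (the pause_image substitution); the helper name is illustrative:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setPauseImage rewrites the pause_image key in a CRI-O drop-in, the same
    // substitution the sed command in the log above performs.
    func setPauseImage(path, image string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        out := re.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", image)))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
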
	I0120 12:21:52.878126  983974 crio.go:462] duration metric: took 1.906229198s to copy over tarball
	I0120 12:21:52.878215  983974 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 12:21:56.684295  983974 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.806038305s)
	I0120 12:21:56.684353  983974 crio.go:469] duration metric: took 3.806179935s to extract the tarball
	I0120 12:21:56.684366  983974 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0120 12:21:56.749618  983974 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:21:56.788519  983974 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.1". assuming images are not preloaded.
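
The preload check above is a single crictl call: dump the runtime's images as JSON and look for the expected tags. A minimal Go sketch of the same lookup, assuming crictl's JSON output exposes the tags under a repoTags field; the function name is illustrative:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // crictlImages mirrors the part of `crictl images --output json` the
    // preload check needs: each image with its repo tags.
    type crictlImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // hasImage reports whether the given tag is already present in the runtime.
    func hasImage(tag string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        var imgs crictlImages
        if err := json.Unmarshal(out, &imgs); err != nil {
            return false, err
        }
        for _, img := range imgs.Images {
            for _, t := range img.RepoTags {
                if t == tag {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.24.1")
        fmt.Println(ok, err)
    }
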
	I0120 12:21:56.788552  983974 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0120 12:21:56.788609  983974 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:21:56.788635  983974 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0120 12:21:56.788646  983974 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0120 12:21:56.788677  983974 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0120 12:21:56.788686  983974 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0120 12:21:56.788671  983974 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0120 12:21:56.788918  983974 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0120 12:21:56.788986  983974 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0120 12:21:56.790450  983974 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0120 12:21:56.790463  983974 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0120 12:21:56.790472  983974 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:21:56.790586  983974 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0120 12:21:56.790591  983974 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0120 12:21:56.790674  983974 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0120 12:21:56.790721  983974 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0120 12:21:56.790855  983974 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0120 12:21:57.009252  983974 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0120 12:21:57.010909  983974 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0120 12:21:57.018415  983974 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0120 12:21:57.026186  983974 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0120 12:21:57.026221  983974 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0120 12:21:57.033293  983974 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0120 12:21:57.042182  983974 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0120 12:21:57.264881  983974 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0120 12:21:57.265011  983974 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0120 12:21:57.265103  983974 ssh_runner.go:195] Run: which crictl
	I0120 12:21:57.279351  983974 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "18688a72645c5d34e1cc70d8deb5bef4fc6c9073bb1b53c7812856afc1de1237" in container runtime
	I0120 12:21:57.279403  983974 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0120 12:21:57.279453  983974 ssh_runner.go:195] Run: which crictl
	I0120 12:21:57.279465  983974 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "beb86f5d8e6cd2234ca24649b74bd10e1e12446764560a3804d85dd6815d0a18" in container runtime
	I0120 12:21:57.279506  983974 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0120 12:21:57.279548  983974 ssh_runner.go:195] Run: which crictl
	I0120 12:21:57.279580  983974 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0120 12:21:57.279615  983974 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0120 12:21:57.279655  983974 ssh_runner.go:195] Run: which crictl
	I0120 12:21:57.279655  983974 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "e9f4b425f9192c11c0fa338cabe04f832aa5cea6dcbba2d1bd2a931224421693" in container runtime
	I0120 12:21:57.279732  983974 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0120 12:21:57.279762  983974 ssh_runner.go:195] Run: which crictl
	I0120 12:21:57.279782  983974 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "b4ea7e648530d171b38f67305e22caf49f9d968d71c558e663709b805076538d" in container runtime
	I0120 12:21:57.279813  983974 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0120 12:21:57.279842  983974 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0120 12:21:57.279788  983974 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0120 12:21:57.279958  983974 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0120 12:21:57.279987  983974 ssh_runner.go:195] Run: which crictl
	I0120 12:21:57.280012  983974 ssh_runner.go:195] Run: which crictl
	I0120 12:21:57.292121  983974 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0120 12:21:57.292272  983974 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0120 12:21:57.295427  983974 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.1
	I0120 12:21:57.343770  983974 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0120 12:21:57.343952  983974 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0120 12:21:57.343971  983974 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0120 12:21:57.344147  983974 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0120 12:21:57.462391  983974 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0120 12:21:57.462725  983974 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0120 12:21:57.468801  983974 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.1
	I0120 12:21:57.526867  983974 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0120 12:21:57.526957  983974 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0120 12:21:57.527064  983974 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0120 12:21:57.537896  983974 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0120 12:21:57.585501  983974 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0120 12:21:58.221635  984209 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 12:21:58.221717  984209 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 12:21:58.228428  984209 start.go:563] Will wait 60s for crictl version
	I0120 12:21:58.228493  984209 ssh_runner.go:195] Run: which crictl
	I0120 12:21:58.233479  984209 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 12:21:58.283380  984209 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 12:21:58.283476  984209 ssh_runner.go:195] Run: crio --version
	I0120 12:21:58.319657  984209 ssh_runner.go:195] Run: crio --version
	I0120 12:21:58.357799  984209 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0120 12:21:55.433782  984358 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	I0120 12:21:55.433988  984358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:21:55.434019  984358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:21:55.454707  984358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46845
	I0120 12:21:55.455315  984358 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:21:55.456004  984358 main.go:141] libmachine: Using API Version  1
	I0120 12:21:55.456067  984358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:21:55.456514  984358 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:21:55.456741  984358 main.go:141] libmachine: (NoKubernetes-378897) Calling .GetMachineName
	I0120 12:21:55.456934  984358 main.go:141] libmachine: (NoKubernetes-378897) Calling .DriverName
	I0120 12:21:55.457143  984358 start.go:159] libmachine.API.Create for "NoKubernetes-378897" (driver="kvm2")
	I0120 12:21:55.457165  984358 client.go:168] LocalClient.Create starting
	I0120 12:21:55.457201  984358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem
	I0120 12:21:55.457236  984358 main.go:141] libmachine: Decoding PEM data...
	I0120 12:21:55.457251  984358 main.go:141] libmachine: Parsing certificate...
	I0120 12:21:55.457317  984358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem
	I0120 12:21:55.457337  984358 main.go:141] libmachine: Decoding PEM data...
	I0120 12:21:55.457348  984358 main.go:141] libmachine: Parsing certificate...
	I0120 12:21:55.457365  984358 main.go:141] libmachine: Running pre-create checks...
	I0120 12:21:55.457373  984358 main.go:141] libmachine: (NoKubernetes-378897) Calling .PreCreateCheck
	I0120 12:21:55.457790  984358 main.go:141] libmachine: (NoKubernetes-378897) Calling .GetConfigRaw
	I0120 12:21:55.458296  984358 main.go:141] libmachine: Creating machine...
	I0120 12:21:55.458307  984358 main.go:141] libmachine: (NoKubernetes-378897) Calling .Create
	I0120 12:21:55.458495  984358 main.go:141] libmachine: (NoKubernetes-378897) creating KVM machine...
	I0120 12:21:55.458509  984358 main.go:141] libmachine: (NoKubernetes-378897) creating network...
	I0120 12:21:55.459873  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | found existing default KVM network
	I0120 12:21:55.461727  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | I0120 12:21:55.461520  984408 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:85:dd:5e} reservation:<nil>}
	I0120 12:21:55.463231  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | I0120 12:21:55.463139  984408 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:67:b2:5f} reservation:<nil>}
	I0120 12:21:55.465188  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | I0120 12:21:55.465095  984408 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000390bf0}
	I0120 12:21:55.465254  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | created network xml: 
	I0120 12:21:55.465268  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | <network>
	I0120 12:21:55.465284  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG |   <name>mk-NoKubernetes-378897</name>
	I0120 12:21:55.465293  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG |   <dns enable='no'/>
	I0120 12:21:55.465300  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG |   
	I0120 12:21:55.465307  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0120 12:21:55.465315  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG |     <dhcp>
	I0120 12:21:55.465323  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0120 12:21:55.465331  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG |     </dhcp>
	I0120 12:21:55.465336  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG |   </ip>
	I0120 12:21:55.465343  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG |   
	I0120 12:21:55.465348  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | </network>
	I0120 12:21:55.465357  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | 
	I0120 12:21:55.470786  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | trying to create private KVM network mk-NoKubernetes-378897 192.168.61.0/24...
	I0120 12:21:55.552688  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | private KVM network mk-NoKubernetes-378897 192.168.61.0/24 created
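
network.go above walks candidate private /24 subnets, skips the ones already claimed by other libvirt networks (192.168.39.0/24 and 192.168.50.0/24), and settles on 192.168.61.0/24 for the new machine. A minimal Go sketch of that selection, assuming the in-use subnets have already been collected; the function name and candidate list are illustrative:

    package main

    import (
        "fmt"
        "net"
    )

    // firstFreeSubnet returns the first candidate CIDR whose network address is
    // not already covered by one of the in-use subnets.
    func firstFreeSubnet(candidates, inUse []string) (string, error) {
        var used []*net.IPNet
        for _, c := range inUse {
            _, n, err := net.ParseCIDR(c)
            if err != nil {
                return "", err
            }
            used = append(used, n)
        }
        for _, c := range candidates {
            ip, _, err := net.ParseCIDR(c)
            if err != nil {
                return "", err
            }
            taken := false
            for _, n := range used {
                if n.Contains(ip) {
                    taken = true
                    break
                }
            }
            if !taken {
                return c, nil
            }
        }
        return "", fmt.Errorf("no free subnet among %d candidates", len(candidates))
    }

    func main() {
        free, err := firstFreeSubnet(
            []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"},
            []string{"192.168.39.0/24", "192.168.50.0/24"},
        )
        fmt.Println(free, err) // 192.168.61.0/24 <nil>
    }
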
	I0120 12:21:55.552726  984358 main.go:141] libmachine: (NoKubernetes-378897) setting up store path in /home/jenkins/minikube-integration/20151-942401/.minikube/machines/NoKubernetes-378897 ...
	I0120 12:21:55.552747  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | I0120 12:21:55.552663  984408 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20151-942401/.minikube
	I0120 12:21:55.552758  984358 main.go:141] libmachine: (NoKubernetes-378897) building disk image from file:///home/jenkins/minikube-integration/20151-942401/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0120 12:21:55.552867  984358 main.go:141] libmachine: (NoKubernetes-378897) Downloading /home/jenkins/minikube-integration/20151-942401/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20151-942401/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0120 12:21:55.946067  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | I0120 12:21:55.945906  984408 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/NoKubernetes-378897/id_rsa...
	I0120 12:21:56.210132  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | I0120 12:21:56.209947  984408 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/NoKubernetes-378897/NoKubernetes-378897.rawdisk...
	I0120 12:21:56.210153  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | Writing magic tar header
	I0120 12:21:56.210174  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | Writing SSH key tar header
	I0120 12:21:56.210236  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | I0120 12:21:56.210176  984408 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20151-942401/.minikube/machines/NoKubernetes-378897 ...
	I0120 12:21:56.210369  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/NoKubernetes-378897
	I0120 12:21:56.210410  984358 main.go:141] libmachine: (NoKubernetes-378897) setting executable bit set on /home/jenkins/minikube-integration/20151-942401/.minikube/machines/NoKubernetes-378897 (perms=drwx------)
	I0120 12:21:56.210426  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-942401/.minikube/machines
	I0120 12:21:56.210472  984358 main.go:141] libmachine: (NoKubernetes-378897) setting executable bit set on /home/jenkins/minikube-integration/20151-942401/.minikube/machines (perms=drwxr-xr-x)
	I0120 12:21:56.210503  984358 main.go:141] libmachine: (NoKubernetes-378897) setting executable bit set on /home/jenkins/minikube-integration/20151-942401/.minikube (perms=drwxr-xr-x)
	I0120 12:21:56.210533  984358 main.go:141] libmachine: (NoKubernetes-378897) setting executable bit set on /home/jenkins/minikube-integration/20151-942401 (perms=drwxrwxr-x)
	I0120 12:21:56.210546  984358 main.go:141] libmachine: (NoKubernetes-378897) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0120 12:21:56.210553  984358 main.go:141] libmachine: (NoKubernetes-378897) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0120 12:21:56.210563  984358 main.go:141] libmachine: (NoKubernetes-378897) creating domain...
	I0120 12:21:56.210575  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-942401/.minikube
	I0120 12:21:56.210582  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-942401
	I0120 12:21:56.210591  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0120 12:21:56.210598  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | checking permissions on dir: /home/jenkins
	I0120 12:21:56.210606  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | checking permissions on dir: /home
	I0120 12:21:56.210611  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | skipping /home - not owner
	I0120 12:21:56.212297  984358 main.go:141] libmachine: (NoKubernetes-378897) define libvirt domain using xml: 
	I0120 12:21:56.212321  984358 main.go:141] libmachine: (NoKubernetes-378897) <domain type='kvm'>
	I0120 12:21:56.212330  984358 main.go:141] libmachine: (NoKubernetes-378897)   <name>NoKubernetes-378897</name>
	I0120 12:21:56.212337  984358 main.go:141] libmachine: (NoKubernetes-378897)   <memory unit='MiB'>6000</memory>
	I0120 12:21:56.212354  984358 main.go:141] libmachine: (NoKubernetes-378897)   <vcpu>2</vcpu>
	I0120 12:21:56.212360  984358 main.go:141] libmachine: (NoKubernetes-378897)   <features>
	I0120 12:21:56.212371  984358 main.go:141] libmachine: (NoKubernetes-378897)     <acpi/>
	I0120 12:21:56.212375  984358 main.go:141] libmachine: (NoKubernetes-378897)     <apic/>
	I0120 12:21:56.212383  984358 main.go:141] libmachine: (NoKubernetes-378897)     <pae/>
	I0120 12:21:56.212388  984358 main.go:141] libmachine: (NoKubernetes-378897)     
	I0120 12:21:56.212395  984358 main.go:141] libmachine: (NoKubernetes-378897)   </features>
	I0120 12:21:56.212401  984358 main.go:141] libmachine: (NoKubernetes-378897)   <cpu mode='host-passthrough'>
	I0120 12:21:56.212408  984358 main.go:141] libmachine: (NoKubernetes-378897)   
	I0120 12:21:56.212413  984358 main.go:141] libmachine: (NoKubernetes-378897)   </cpu>
	I0120 12:21:56.212419  984358 main.go:141] libmachine: (NoKubernetes-378897)   <os>
	I0120 12:21:56.212425  984358 main.go:141] libmachine: (NoKubernetes-378897)     <type>hvm</type>
	I0120 12:21:56.212431  984358 main.go:141] libmachine: (NoKubernetes-378897)     <boot dev='cdrom'/>
	I0120 12:21:56.212436  984358 main.go:141] libmachine: (NoKubernetes-378897)     <boot dev='hd'/>
	I0120 12:21:56.212442  984358 main.go:141] libmachine: (NoKubernetes-378897)     <bootmenu enable='no'/>
	I0120 12:21:56.212446  984358 main.go:141] libmachine: (NoKubernetes-378897)   </os>
	I0120 12:21:56.212453  984358 main.go:141] libmachine: (NoKubernetes-378897)   <devices>
	I0120 12:21:56.212458  984358 main.go:141] libmachine: (NoKubernetes-378897)     <disk type='file' device='cdrom'>
	I0120 12:21:56.212470  984358 main.go:141] libmachine: (NoKubernetes-378897)       <source file='/home/jenkins/minikube-integration/20151-942401/.minikube/machines/NoKubernetes-378897/boot2docker.iso'/>
	I0120 12:21:56.212477  984358 main.go:141] libmachine: (NoKubernetes-378897)       <target dev='hdc' bus='scsi'/>
	I0120 12:21:56.212484  984358 main.go:141] libmachine: (NoKubernetes-378897)       <readonly/>
	I0120 12:21:56.212499  984358 main.go:141] libmachine: (NoKubernetes-378897)     </disk>
	I0120 12:21:56.212507  984358 main.go:141] libmachine: (NoKubernetes-378897)     <disk type='file' device='disk'>
	I0120 12:21:56.212516  984358 main.go:141] libmachine: (NoKubernetes-378897)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0120 12:21:56.212528  984358 main.go:141] libmachine: (NoKubernetes-378897)       <source file='/home/jenkins/minikube-integration/20151-942401/.minikube/machines/NoKubernetes-378897/NoKubernetes-378897.rawdisk'/>
	I0120 12:21:56.212536  984358 main.go:141] libmachine: (NoKubernetes-378897)       <target dev='hda' bus='virtio'/>
	I0120 12:21:56.212542  984358 main.go:141] libmachine: (NoKubernetes-378897)     </disk>
	I0120 12:21:56.212548  984358 main.go:141] libmachine: (NoKubernetes-378897)     <interface type='network'>
	I0120 12:21:56.212556  984358 main.go:141] libmachine: (NoKubernetes-378897)       <source network='mk-NoKubernetes-378897'/>
	I0120 12:21:56.212564  984358 main.go:141] libmachine: (NoKubernetes-378897)       <model type='virtio'/>
	I0120 12:21:56.212570  984358 main.go:141] libmachine: (NoKubernetes-378897)     </interface>
	I0120 12:21:56.212576  984358 main.go:141] libmachine: (NoKubernetes-378897)     <interface type='network'>
	I0120 12:21:56.212583  984358 main.go:141] libmachine: (NoKubernetes-378897)       <source network='default'/>
	I0120 12:21:56.212588  984358 main.go:141] libmachine: (NoKubernetes-378897)       <model type='virtio'/>
	I0120 12:21:56.212595  984358 main.go:141] libmachine: (NoKubernetes-378897)     </interface>
	I0120 12:21:56.212601  984358 main.go:141] libmachine: (NoKubernetes-378897)     <serial type='pty'>
	I0120 12:21:56.212608  984358 main.go:141] libmachine: (NoKubernetes-378897)       <target port='0'/>
	I0120 12:21:56.212612  984358 main.go:141] libmachine: (NoKubernetes-378897)     </serial>
	I0120 12:21:56.212619  984358 main.go:141] libmachine: (NoKubernetes-378897)     <console type='pty'>
	I0120 12:21:56.212625  984358 main.go:141] libmachine: (NoKubernetes-378897)       <target type='serial' port='0'/>
	I0120 12:21:56.212631  984358 main.go:141] libmachine: (NoKubernetes-378897)     </console>
	I0120 12:21:56.212636  984358 main.go:141] libmachine: (NoKubernetes-378897)     <rng model='virtio'>
	I0120 12:21:56.212644  984358 main.go:141] libmachine: (NoKubernetes-378897)       <backend model='random'>/dev/random</backend>
	I0120 12:21:56.212651  984358 main.go:141] libmachine: (NoKubernetes-378897)     </rng>
	I0120 12:21:56.212657  984358 main.go:141] libmachine: (NoKubernetes-378897)     
	I0120 12:21:56.212663  984358 main.go:141] libmachine: (NoKubernetes-378897)     
	I0120 12:21:56.212669  984358 main.go:141] libmachine: (NoKubernetes-378897)   </devices>
	I0120 12:21:56.212673  984358 main.go:141] libmachine: (NoKubernetes-378897) </domain>
	I0120 12:21:56.212685  984358 main.go:141] libmachine: (NoKubernetes-378897) 
	I0120 12:21:56.304952  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | domain NoKubernetes-378897 has defined MAC address 52:54:00:0c:76:6e in network default
	I0120 12:21:56.305781  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | domain NoKubernetes-378897 has defined MAC address 52:54:00:5b:e8:94 in network mk-NoKubernetes-378897
	I0120 12:21:56.305881  984358 main.go:141] libmachine: (NoKubernetes-378897) starting domain...
	I0120 12:21:56.305900  984358 main.go:141] libmachine: (NoKubernetes-378897) ensuring networks are active...
	I0120 12:21:56.306917  984358 main.go:141] libmachine: (NoKubernetes-378897) Ensuring network default is active
	I0120 12:21:56.307399  984358 main.go:141] libmachine: (NoKubernetes-378897) Ensuring network mk-NoKubernetes-378897 is active
	I0120 12:21:56.308166  984358 main.go:141] libmachine: (NoKubernetes-378897) getting domain XML...
	I0120 12:21:56.309071  984358 main.go:141] libmachine: (NoKubernetes-378897) creating domain...
	I0120 12:21:58.109623  984358 main.go:141] libmachine: (NoKubernetes-378897) waiting for IP...
	I0120 12:21:58.111323  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | domain NoKubernetes-378897 has defined MAC address 52:54:00:5b:e8:94 in network mk-NoKubernetes-378897
	I0120 12:21:58.111860  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | unable to find current IP address of domain NoKubernetes-378897 in network mk-NoKubernetes-378897
	I0120 12:21:58.111935  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | I0120 12:21:58.111858  984408 retry.go:31] will retry after 300.198876ms: waiting for domain to come up
	I0120 12:21:58.413657  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | domain NoKubernetes-378897 has defined MAC address 52:54:00:5b:e8:94 in network mk-NoKubernetes-378897
	I0120 12:21:58.414454  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | unable to find current IP address of domain NoKubernetes-378897 in network mk-NoKubernetes-378897
	I0120 12:21:58.414481  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | I0120 12:21:58.414415  984408 retry.go:31] will retry after 307.778398ms: waiting for domain to come up
	I0120 12:21:58.723825  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | domain NoKubernetes-378897 has defined MAC address 52:54:00:5b:e8:94 in network mk-NoKubernetes-378897
	I0120 12:21:58.724432  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | unable to find current IP address of domain NoKubernetes-378897 in network mk-NoKubernetes-378897
	I0120 12:21:58.724455  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | I0120 12:21:58.724414  984408 retry.go:31] will retry after 324.367903ms: waiting for domain to come up
	I0120 12:21:59.050179  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | domain NoKubernetes-378897 has defined MAC address 52:54:00:5b:e8:94 in network mk-NoKubernetes-378897
	I0120 12:21:59.050827  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | unable to find current IP address of domain NoKubernetes-378897 in network mk-NoKubernetes-378897
	I0120 12:21:59.050852  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | I0120 12:21:59.050787  984408 retry.go:31] will retry after 405.111407ms: waiting for domain to come up
	I0120 12:21:59.457294  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | domain NoKubernetes-378897 has defined MAC address 52:54:00:5b:e8:94 in network mk-NoKubernetes-378897
	I0120 12:21:59.457880  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | unable to find current IP address of domain NoKubernetes-378897 in network mk-NoKubernetes-378897
	I0120 12:21:59.457953  984358 main.go:141] libmachine: (NoKubernetes-378897) DBG | I0120 12:21:59.457863  984408 retry.go:31] will retry after 499.217202ms: waiting for domain to come up
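
The retry.go lines above poll the newly defined domain for an IP address, sleeping a little longer after each failed attempt. A minimal Go sketch of that polling loop; the lookup callback, the starting interval, and the growth factor are illustrative rather than minikube's own:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitForIP polls lookup until it returns an address or the deadline passes,
    // growing the wait between attempts much like the retry.go lines above.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        wait := 300 * time.Millisecond
        for attempt := 1; ; attempt++ {
            ip, err := lookup()
            if err == nil && ip != "" {
                return ip, nil
            }
            if time.Now().After(deadline) {
                return "", errors.New("timed out waiting for domain IP")
            }
            fmt.Printf("attempt %d: no IP yet, retrying after %v\n", attempt, wait)
            time.Sleep(wait)
            wait += wait / 3 // grow the interval a little each round
        }
    }

    func main() {
        // Illustrative lookup that "finds" an IP on the third attempt.
        calls := 0
        ip, err := waitForIP(func() (string, error) {
            calls++
            if calls < 3 {
                return "", errors.New("no lease yet")
            }
            return "192.168.61.2", nil
        }, 30*time.Second)
        fmt.Println(ip, err)
    }
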
	I0120 12:21:58.359130  984209 main.go:141] libmachine: (pause-298045) Calling .GetIP
	I0120 12:21:58.362846  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:58.363249  984209 main.go:141] libmachine: (pause-298045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1c:6a", ip: ""} in network mk-pause-298045: {Iface:virbr2 ExpiryTime:2025-01-20 13:21:01 +0000 UTC Type:0 Mac:52:54:00:6a:1c:6a Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:pause-298045 Clientid:01:52:54:00:6a:1c:6a}
	I0120 12:21:58.363277  984209 main.go:141] libmachine: (pause-298045) DBG | domain pause-298045 has defined IP address 192.168.50.60 and MAC address 52:54:00:6a:1c:6a in network mk-pause-298045
	I0120 12:21:58.363484  984209 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0120 12:21:58.368125  984209 kubeadm.go:883] updating cluster {Name:pause-298045 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:pause-298045 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.60 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 12:21:58.368276  984209 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 12:21:58.368320  984209 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:21:58.436082  984209 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 12:21:58.436112  984209 crio.go:433] Images already preloaded, skipping extraction
	I0120 12:21:58.436178  984209 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:21:58.483739  984209 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 12:21:58.483767  984209 cache_images.go:84] Images are preloaded, skipping loading
	I0120 12:21:58.483780  984209 kubeadm.go:934] updating node { 192.168.50.60 8443 v1.32.0 crio true true} ...
	I0120 12:21:58.483916  984209 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-298045 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:pause-298045 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
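
kubeadm.go above renders the kubelet systemd drop-in with the node-specific values (the v1.32.0 binary path, the hostname override, the node IP). A minimal Go sketch of that kind of templating; the struct and field names are chosen here for illustration, not minikube's own:

    package main

    import (
        "os"
        "text/template"
    )

    // kubeletUnit holds the node-specific values substituted into the drop-in;
    // the field names are illustrative.
    type kubeletUnit struct {
        KubeletPath string
        Hostname    string
        NodeIP      string
    }

    const unitTmpl = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(unitTmpl))
        _ = t.Execute(os.Stdout, kubeletUnit{
            KubeletPath: "/var/lib/minikube/binaries/v1.32.0/kubelet",
            Hostname:    "pause-298045",
            NodeIP:      "192.168.50.60",
        })
    }
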
	I0120 12:21:58.484007  984209 ssh_runner.go:195] Run: crio config
	I0120 12:21:58.538699  984209 cni.go:84] Creating CNI manager for ""
	I0120 12:21:58.538723  984209 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:21:58.538736  984209 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 12:21:58.538768  984209 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.60 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-298045 NodeName:pause-298045 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.60"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.60 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 12:21:58.538956  984209 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.60
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-298045"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.60"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.60"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 12:21:58.539039  984209 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 12:21:58.551841  984209 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 12:21:58.551919  984209 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 12:21:58.563410  984209 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0120 12:21:58.582875  984209 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 12:21:58.602853  984209 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I0120 12:21:58.627254  984209 ssh_runner.go:195] Run: grep 192.168.50.60	control-plane.minikube.internal$ /etc/hosts
	I0120 12:21:58.632549  984209 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:21:58.798793  984209 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:21:58.825671  984209 certs.go:68] Setting up /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/pause-298045 for IP: 192.168.50.60
	I0120 12:21:58.825704  984209 certs.go:194] generating shared ca certs ...
	I0120 12:21:58.825728  984209 certs.go:226] acquiring lock for ca certs: {Name:mk7d6f61e0c358ebc451db1b73619131ab80bd63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:21:58.825932  984209 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20151-942401/.minikube/ca.key
	I0120 12:21:58.826004  984209 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.key
	I0120 12:21:58.826021  984209 certs.go:256] generating profile certs ...
	I0120 12:21:58.826158  984209 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/pause-298045/client.key
	I0120 12:21:58.826251  984209 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/pause-298045/apiserver.key.7d49e320
	I0120 12:21:58.826318  984209 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/pause-298045/proxy-client.key
	I0120 12:21:58.826474  984209 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656.pem (1338 bytes)
	W0120 12:21:58.826547  984209 certs.go:480] ignoring /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656_empty.pem, impossibly tiny 0 bytes
	I0120 12:21:58.826566  984209 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem (1679 bytes)
	I0120 12:21:58.826602  984209 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem (1078 bytes)
	I0120 12:21:58.826637  984209 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem (1123 bytes)
	I0120 12:21:58.826675  984209 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem (1675 bytes)
	I0120 12:21:58.826736  984209 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem (1708 bytes)
	I0120 12:21:58.827584  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 12:21:58.909041  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0120 12:21:58.948606  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 12:21:59.012282  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 12:21:59.109695  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/pause-298045/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0120 12:21:59.238343  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/pause-298045/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0120 12:21:59.299284  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/pause-298045/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 12:21:59.334101  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/pause-298045/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0120 12:21:59.374934  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 12:21:59.414484  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656.pem --> /usr/share/ca-certificates/949656.pem (1338 bytes)
	I0120 12:21:59.444169  984209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem --> /usr/share/ca-certificates/9496562.pem (1708 bytes)
	I0120 12:21:59.467884  984209 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 12:21:59.490444  984209 ssh_runner.go:195] Run: openssl version
	I0120 12:21:59.507626  984209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 12:21:59.520255  984209 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:21:59.524911  984209 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 11:22 /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:21:59.524971  984209 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:21:59.531352  984209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 12:21:59.543105  984209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/949656.pem && ln -fs /usr/share/ca-certificates/949656.pem /etc/ssl/certs/949656.pem"
	I0120 12:21:59.555663  984209 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/949656.pem
	I0120 12:21:59.561820  984209 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 11:30 /usr/share/ca-certificates/949656.pem
	I0120 12:21:59.561864  984209 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/949656.pem
	I0120 12:21:59.569243  984209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/949656.pem /etc/ssl/certs/51391683.0"
	I0120 12:21:59.581486  984209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9496562.pem && ln -fs /usr/share/ca-certificates/9496562.pem /etc/ssl/certs/9496562.pem"
	I0120 12:21:59.592694  984209 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9496562.pem
	I0120 12:21:59.597427  984209 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 11:30 /usr/share/ca-certificates/9496562.pem
	I0120 12:21:59.597478  984209 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9496562.pem
	I0120 12:21:59.604688  984209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9496562.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 12:21:59.616268  984209 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 12:21:59.626223  984209 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 12:21:59.632273  984209 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 12:21:59.637690  984209 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 12:21:59.643343  984209 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 12:21:59.649192  984209 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 12:21:59.654933  984209 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0120 12:21:59.660489  984209 kubeadm.go:392] StartCluster: {Name:pause-298045 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:pause-298045 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.60 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:21:59.660630  984209 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 12:21:59.660679  984209 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 12:21:59.702703  984209 cri.go:89] found id: "7522111a81eb592884102e40c70c96a442aacd7ead34e623d6d75cb0047e54a1"
	I0120 12:21:59.702723  984209 cri.go:89] found id: "87ea2b78f22aa3f634c75f73e4ff59c82419e70bcabcbf38ac3cd2cff94e916e"
	I0120 12:21:59.702727  984209 cri.go:89] found id: "97c209a11074c58552c075bb6b27e8d296987fd3a0b46a09585dcfc690275572"
	I0120 12:21:59.702730  984209 cri.go:89] found id: "f400c9e5b8388764e549d978dee73e17cb00cf3f100ab6ebfd3b553e155860ba"
	I0120 12:21:59.702733  984209 cri.go:89] found id: "97779b8bb3c647064d9431c5881d2fd1d07f9924cefcb8010cf1b47b282e8191"
	I0120 12:21:59.702736  984209 cri.go:89] found id: "38e061ad131bdad6beb214a78c3d8be29d96f19162a9861a6704003a916763f8"
	I0120 12:21:59.702739  984209 cri.go:89] found id: ""
	I0120 12:21:59.702780  984209 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-298045 -n pause-298045
helpers_test.go:261: (dbg) Run:  kubectl --context pause-298045 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (37.66s)
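The container IDs in the post-mortem above come from minikube shelling out over SSH to crictl and runc (the ssh_runner.go:195 and cri.go:89 "found id" lines). When debugging a paused profile by hand, the same listing can be reproduced from inside the VM; the following is a minimal illustrative Go sketch, not minikube's own code, and it assumes it runs on the minikube node with crictl and runc available:

	// list_kube_system.go: reproduce the CRI container listing shown in the log above.
	// Sketch only: it shells out to the crictl and runc commands recorded in the log.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// All kube-system container IDs, one per line (matches the cri.go:89 "found id" entries).
		ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			log.Fatalf("crictl: %v", err)
		}
		fmt.Printf("%s", ids)

		// Low-level runc view of the same containers, as JSON (the last command in the log).
		states, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			log.Fatalf("runc: %v", err)
		}
		fmt.Printf("%s", states)
	}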

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (270.82s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-134433 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-134433 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m30.51319818s)

                                                
                                                
-- stdout --
	* [old-k8s-version-134433] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20151
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20151-942401/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-942401/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-134433" primary control-plane node in "old-k8s-version-134433" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
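The stderr capture that follows shows the kvm2 driver's provisioning path for this failed start: it defines a dedicated libvirt network (mk-old-k8s-version-134433 on 192.168.50.0/24), defines the domain from generated XML, polls for a DHCP lease ("waiting for IP ... will retry after ..."), then waits on SSH. When retracing such a run by hand, the lease table the driver polls can be read with virsh; this is a small illustrative Go sketch (assuming virsh is installed and qemu:///system is reachable), not part of the test suite:

	// dhcp_leases.go: print the DHCP leases of a minikube kvm2 network, mirroring the
	// "waiting for IP" retry loop visible in the stderr log below. Illustrative sketch only.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Network name created in the log; pass a different one as the first argument.
		network := "mk-old-k8s-version-134433"
		if len(os.Args) > 1 {
			network = os.Args[1]
		}
		out, err := exec.Command("virsh", "--connect", "qemu:///system",
			"net-dhcp-leases", network).CombinedOutput()
		if err != nil {
			fmt.Fprintf(os.Stderr, "virsh failed: %v\n%s", err, out)
			os.Exit(1)
		}
		// An empty lease table corresponds to the repeated
		// "unable to find current IP address" messages in the log.
		fmt.Printf("%s", out)
	}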
** stderr ** 
	I0120 12:24:57.012243  989425 out.go:345] Setting OutFile to fd 1 ...
	I0120 12:24:57.012582  989425 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:24:57.012595  989425 out.go:358] Setting ErrFile to fd 2...
	I0120 12:24:57.012600  989425 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:24:57.012911  989425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-942401/.minikube/bin
	I0120 12:24:57.013735  989425 out.go:352] Setting JSON to false
	I0120 12:24:57.015148  989425 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":18440,"bootTime":1737357457,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 12:24:57.015238  989425 start.go:139] virtualization: kvm guest
	I0120 12:24:57.017774  989425 out.go:177] * [old-k8s-version-134433] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 12:24:57.019470  989425 out.go:177]   - MINIKUBE_LOCATION=20151
	I0120 12:24:57.019486  989425 notify.go:220] Checking for updates...
	I0120 12:24:57.021145  989425 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 12:24:57.022676  989425 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:24:57.024188  989425 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-942401/.minikube
	I0120 12:24:57.025677  989425 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 12:24:57.027094  989425 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 12:24:57.028901  989425 config.go:182] Loaded profile config "cert-expiration-673364": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:24:57.029028  989425 config.go:182] Loaded profile config "cert-options-600668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:24:57.029145  989425 config.go:182] Loaded profile config "kubernetes-upgrade-049625": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0120 12:24:57.029310  989425 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 12:24:57.074616  989425 out.go:177] * Using the kvm2 driver based on user configuration
	I0120 12:24:57.076284  989425 start.go:297] selected driver: kvm2
	I0120 12:24:57.076310  989425 start.go:901] validating driver "kvm2" against <nil>
	I0120 12:24:57.076326  989425 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 12:24:57.077462  989425 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:24:57.077601  989425 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20151-942401/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 12:24:57.098465  989425 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 12:24:57.098564  989425 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 12:24:57.098957  989425 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 12:24:57.099008  989425 cni.go:84] Creating CNI manager for ""
	I0120 12:24:57.099084  989425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:24:57.099097  989425 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0120 12:24:57.099188  989425 start.go:340] cluster config:
	{Name:old-k8s-version-134433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-134433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:24:57.099341  989425 iso.go:125] acquiring lock: {Name:mk7f0ac9e7ba04626414e742c3e5292d79996a4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:24:57.101339  989425 out.go:177] * Starting "old-k8s-version-134433" primary control-plane node in "old-k8s-version-134433" cluster
	I0120 12:24:57.102741  989425 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0120 12:24:57.102844  989425 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0120 12:24:57.103173  989425 cache.go:56] Caching tarball of preloaded images
	I0120 12:24:57.103341  989425 preload.go:172] Found /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 12:24:57.103360  989425 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0120 12:24:57.103506  989425 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/config.json ...
	I0120 12:24:57.103540  989425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/config.json: {Name:mkec4f17d249bc59d3e2da840bcbc2f85399c1b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:24:57.103845  989425 start.go:360] acquireMachinesLock for old-k8s-version-134433: {Name:mkd5527ce9753efd08511b23d71dbb6bbf416f1b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 12:24:57.103921  989425 start.go:364] duration metric: took 46.825µs to acquireMachinesLock for "old-k8s-version-134433"
	I0120 12:24:57.103951  989425 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-134433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-134433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 12:24:57.104044  989425 start.go:125] createHost starting for "" (driver="kvm2")
	I0120 12:24:57.105691  989425 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0120 12:24:57.105887  989425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:24:57.105919  989425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:24:57.125931  989425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36757
	I0120 12:24:57.126482  989425 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:24:57.127158  989425 main.go:141] libmachine: Using API Version  1
	I0120 12:24:57.127190  989425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:24:57.127655  989425 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:24:57.127927  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetMachineName
	I0120 12:24:57.128097  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:24:57.128301  989425 start.go:159] libmachine.API.Create for "old-k8s-version-134433" (driver="kvm2")
	I0120 12:24:57.128334  989425 client.go:168] LocalClient.Create starting
	I0120 12:24:57.128376  989425 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem
	I0120 12:24:57.128411  989425 main.go:141] libmachine: Decoding PEM data...
	I0120 12:24:57.128429  989425 main.go:141] libmachine: Parsing certificate...
	I0120 12:24:57.128500  989425 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem
	I0120 12:24:57.128532  989425 main.go:141] libmachine: Decoding PEM data...
	I0120 12:24:57.128549  989425 main.go:141] libmachine: Parsing certificate...
	I0120 12:24:57.128571  989425 main.go:141] libmachine: Running pre-create checks...
	I0120 12:24:57.128587  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .PreCreateCheck
	I0120 12:24:57.128959  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetConfigRaw
	I0120 12:24:57.129440  989425 main.go:141] libmachine: Creating machine...
	I0120 12:24:57.129458  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .Create
	I0120 12:24:57.129625  989425 main.go:141] libmachine: (old-k8s-version-134433) creating KVM machine...
	I0120 12:24:57.129650  989425 main.go:141] libmachine: (old-k8s-version-134433) creating network...
	I0120 12:24:57.131069  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | found existing default KVM network
	I0120 12:24:57.133280  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:24:57.133065  989449 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:65:e8:f5} reservation:<nil>}
	I0120 12:24:57.135000  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:24:57.134922  989449 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a6cb0}
	I0120 12:24:57.135073  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | created network xml: 
	I0120 12:24:57.135093  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | <network>
	I0120 12:24:57.135116  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG |   <name>mk-old-k8s-version-134433</name>
	I0120 12:24:57.135125  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG |   <dns enable='no'/>
	I0120 12:24:57.135134  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG |   
	I0120 12:24:57.135153  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0120 12:24:57.135167  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG |     <dhcp>
	I0120 12:24:57.135177  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0120 12:24:57.135188  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG |     </dhcp>
	I0120 12:24:57.135192  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG |   </ip>
	I0120 12:24:57.135199  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG |   
	I0120 12:24:57.135204  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | </network>
	I0120 12:24:57.135217  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | 
	I0120 12:24:57.140534  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | trying to create private KVM network mk-old-k8s-version-134433 192.168.50.0/24...
	I0120 12:24:57.227122  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | private KVM network mk-old-k8s-version-134433 192.168.50.0/24 created
	I0120 12:24:57.228370  989425 main.go:141] libmachine: (old-k8s-version-134433) setting up store path in /home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433 ...
	I0120 12:24:57.228413  989425 main.go:141] libmachine: (old-k8s-version-134433) building disk image from file:///home/jenkins/minikube-integration/20151-942401/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0120 12:24:57.228430  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:24:57.227257  989449 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20151-942401/.minikube
	I0120 12:24:57.228452  989425 main.go:141] libmachine: (old-k8s-version-134433) Downloading /home/jenkins/minikube-integration/20151-942401/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20151-942401/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0120 12:24:57.596590  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:24:57.596391  989449 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433/id_rsa...
	I0120 12:24:57.952218  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:24:57.952075  989449 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433/old-k8s-version-134433.rawdisk...
	I0120 12:24:57.952252  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | Writing magic tar header
	I0120 12:24:57.952265  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | Writing SSH key tar header
	I0120 12:24:57.952274  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:24:57.952239  989449 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433 ...
	I0120 12:24:57.952444  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433
	I0120 12:24:57.952485  989425 main.go:141] libmachine: (old-k8s-version-134433) setting executable bit set on /home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433 (perms=drwx------)
	I0120 12:24:57.952496  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-942401/.minikube/machines
	I0120 12:24:57.952511  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-942401/.minikube
	I0120 12:24:57.952521  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-942401
	I0120 12:24:57.952538  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0120 12:24:57.952549  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | checking permissions on dir: /home/jenkins
	I0120 12:24:57.952560  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | checking permissions on dir: /home
	I0120 12:24:57.952571  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | skipping /home - not owner
	I0120 12:24:57.952621  989425 main.go:141] libmachine: (old-k8s-version-134433) setting executable bit set on /home/jenkins/minikube-integration/20151-942401/.minikube/machines (perms=drwxr-xr-x)
	I0120 12:24:57.952664  989425 main.go:141] libmachine: (old-k8s-version-134433) setting executable bit set on /home/jenkins/minikube-integration/20151-942401/.minikube (perms=drwxr-xr-x)
	I0120 12:24:57.952718  989425 main.go:141] libmachine: (old-k8s-version-134433) setting executable bit set on /home/jenkins/minikube-integration/20151-942401 (perms=drwxrwxr-x)
	I0120 12:24:57.952750  989425 main.go:141] libmachine: (old-k8s-version-134433) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0120 12:24:57.952770  989425 main.go:141] libmachine: (old-k8s-version-134433) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0120 12:24:57.952783  989425 main.go:141] libmachine: (old-k8s-version-134433) creating domain...
	I0120 12:24:57.953773  989425 main.go:141] libmachine: (old-k8s-version-134433) define libvirt domain using xml: 
	I0120 12:24:57.953798  989425 main.go:141] libmachine: (old-k8s-version-134433) <domain type='kvm'>
	I0120 12:24:57.953810  989425 main.go:141] libmachine: (old-k8s-version-134433)   <name>old-k8s-version-134433</name>
	I0120 12:24:57.953818  989425 main.go:141] libmachine: (old-k8s-version-134433)   <memory unit='MiB'>2200</memory>
	I0120 12:24:57.953826  989425 main.go:141] libmachine: (old-k8s-version-134433)   <vcpu>2</vcpu>
	I0120 12:24:57.953832  989425 main.go:141] libmachine: (old-k8s-version-134433)   <features>
	I0120 12:24:57.953845  989425 main.go:141] libmachine: (old-k8s-version-134433)     <acpi/>
	I0120 12:24:57.953855  989425 main.go:141] libmachine: (old-k8s-version-134433)     <apic/>
	I0120 12:24:57.953863  989425 main.go:141] libmachine: (old-k8s-version-134433)     <pae/>
	I0120 12:24:57.953874  989425 main.go:141] libmachine: (old-k8s-version-134433)     
	I0120 12:24:57.953901  989425 main.go:141] libmachine: (old-k8s-version-134433)   </features>
	I0120 12:24:57.953923  989425 main.go:141] libmachine: (old-k8s-version-134433)   <cpu mode='host-passthrough'>
	I0120 12:24:57.953962  989425 main.go:141] libmachine: (old-k8s-version-134433)   
	I0120 12:24:57.953989  989425 main.go:141] libmachine: (old-k8s-version-134433)   </cpu>
	I0120 12:24:57.954010  989425 main.go:141] libmachine: (old-k8s-version-134433)   <os>
	I0120 12:24:57.954028  989425 main.go:141] libmachine: (old-k8s-version-134433)     <type>hvm</type>
	I0120 12:24:57.954038  989425 main.go:141] libmachine: (old-k8s-version-134433)     <boot dev='cdrom'/>
	I0120 12:24:57.954049  989425 main.go:141] libmachine: (old-k8s-version-134433)     <boot dev='hd'/>
	I0120 12:24:57.954058  989425 main.go:141] libmachine: (old-k8s-version-134433)     <bootmenu enable='no'/>
	I0120 12:24:57.954067  989425 main.go:141] libmachine: (old-k8s-version-134433)   </os>
	I0120 12:24:57.954075  989425 main.go:141] libmachine: (old-k8s-version-134433)   <devices>
	I0120 12:24:57.954086  989425 main.go:141] libmachine: (old-k8s-version-134433)     <disk type='file' device='cdrom'>
	I0120 12:24:57.954098  989425 main.go:141] libmachine: (old-k8s-version-134433)       <source file='/home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433/boot2docker.iso'/>
	I0120 12:24:57.954110  989425 main.go:141] libmachine: (old-k8s-version-134433)       <target dev='hdc' bus='scsi'/>
	I0120 12:24:57.954123  989425 main.go:141] libmachine: (old-k8s-version-134433)       <readonly/>
	I0120 12:24:57.954131  989425 main.go:141] libmachine: (old-k8s-version-134433)     </disk>
	I0120 12:24:57.954143  989425 main.go:141] libmachine: (old-k8s-version-134433)     <disk type='file' device='disk'>
	I0120 12:24:57.954156  989425 main.go:141] libmachine: (old-k8s-version-134433)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0120 12:24:57.954173  989425 main.go:141] libmachine: (old-k8s-version-134433)       <source file='/home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433/old-k8s-version-134433.rawdisk'/>
	I0120 12:24:57.954189  989425 main.go:141] libmachine: (old-k8s-version-134433)       <target dev='hda' bus='virtio'/>
	I0120 12:24:57.954199  989425 main.go:141] libmachine: (old-k8s-version-134433)     </disk>
	I0120 12:24:57.954204  989425 main.go:141] libmachine: (old-k8s-version-134433)     <interface type='network'>
	I0120 12:24:57.954215  989425 main.go:141] libmachine: (old-k8s-version-134433)       <source network='mk-old-k8s-version-134433'/>
	I0120 12:24:57.954227  989425 main.go:141] libmachine: (old-k8s-version-134433)       <model type='virtio'/>
	I0120 12:24:57.954235  989425 main.go:141] libmachine: (old-k8s-version-134433)     </interface>
	I0120 12:24:57.954247  989425 main.go:141] libmachine: (old-k8s-version-134433)     <interface type='network'>
	I0120 12:24:57.954263  989425 main.go:141] libmachine: (old-k8s-version-134433)       <source network='default'/>
	I0120 12:24:57.954273  989425 main.go:141] libmachine: (old-k8s-version-134433)       <model type='virtio'/>
	I0120 12:24:57.954294  989425 main.go:141] libmachine: (old-k8s-version-134433)     </interface>
	I0120 12:24:57.954307  989425 main.go:141] libmachine: (old-k8s-version-134433)     <serial type='pty'>
	I0120 12:24:57.954314  989425 main.go:141] libmachine: (old-k8s-version-134433)       <target port='0'/>
	I0120 12:24:57.954324  989425 main.go:141] libmachine: (old-k8s-version-134433)     </serial>
	I0120 12:24:57.954332  989425 main.go:141] libmachine: (old-k8s-version-134433)     <console type='pty'>
	I0120 12:24:57.954344  989425 main.go:141] libmachine: (old-k8s-version-134433)       <target type='serial' port='0'/>
	I0120 12:24:57.954351  989425 main.go:141] libmachine: (old-k8s-version-134433)     </console>
	I0120 12:24:57.954360  989425 main.go:141] libmachine: (old-k8s-version-134433)     <rng model='virtio'>
	I0120 12:24:57.954368  989425 main.go:141] libmachine: (old-k8s-version-134433)       <backend model='random'>/dev/random</backend>
	I0120 12:24:57.954408  989425 main.go:141] libmachine: (old-k8s-version-134433)     </rng>
	I0120 12:24:57.954434  989425 main.go:141] libmachine: (old-k8s-version-134433)     
	I0120 12:24:57.954449  989425 main.go:141] libmachine: (old-k8s-version-134433)     
	I0120 12:24:57.954461  989425 main.go:141] libmachine: (old-k8s-version-134433)   </devices>
	I0120 12:24:57.954474  989425 main.go:141] libmachine: (old-k8s-version-134433) </domain>
	I0120 12:24:57.954486  989425 main.go:141] libmachine: (old-k8s-version-134433) 
	I0120 12:24:57.959191  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:6c:e1:53 in network default
	I0120 12:24:57.959846  989425 main.go:141] libmachine: (old-k8s-version-134433) starting domain...
	I0120 12:24:57.959873  989425 main.go:141] libmachine: (old-k8s-version-134433) ensuring networks are active...
	I0120 12:24:57.959886  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:24:57.960583  989425 main.go:141] libmachine: (old-k8s-version-134433) Ensuring network default is active
	I0120 12:24:57.960931  989425 main.go:141] libmachine: (old-k8s-version-134433) Ensuring network mk-old-k8s-version-134433 is active
	I0120 12:24:57.961466  989425 main.go:141] libmachine: (old-k8s-version-134433) getting domain XML...
	I0120 12:24:57.962310  989425 main.go:141] libmachine: (old-k8s-version-134433) creating domain...
	I0120 12:24:59.328594  989425 main.go:141] libmachine: (old-k8s-version-134433) waiting for IP...
	I0120 12:24:59.329363  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:24:59.329830  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:24:59.329896  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:24:59.329835  989449 retry.go:31] will retry after 297.389487ms: waiting for domain to come up
	I0120 12:24:59.629540  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:24:59.630105  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:24:59.630133  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:24:59.630069  989449 retry.go:31] will retry after 269.329201ms: waiting for domain to come up
	I0120 12:24:59.900708  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:24:59.901281  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:24:59.901340  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:24:59.901269  989449 retry.go:31] will retry after 355.572884ms: waiting for domain to come up
	I0120 12:25:00.258854  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:00.259449  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:25:00.259478  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:25:00.259371  989449 retry.go:31] will retry after 372.609615ms: waiting for domain to come up
	I0120 12:25:00.634273  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:00.635021  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:25:00.635051  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:25:00.634997  989449 retry.go:31] will retry after 628.145422ms: waiting for domain to come up
	I0120 12:25:01.264366  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:01.264906  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:25:01.264943  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:25:01.264880  989449 retry.go:31] will retry after 876.333086ms: waiting for domain to come up
	I0120 12:25:02.143189  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:02.143741  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:25:02.143797  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:25:02.143702  989449 retry.go:31] will retry after 983.835016ms: waiting for domain to come up
	I0120 12:25:03.128908  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:03.129534  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:25:03.129559  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:25:03.129514  989449 retry.go:31] will retry after 1.006804684s: waiting for domain to come up
	I0120 12:25:04.138356  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:04.138990  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:25:04.139027  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:25:04.138913  989449 retry.go:31] will retry after 1.698132486s: waiting for domain to come up
	I0120 12:25:05.839702  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:05.840361  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:25:05.840398  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:25:05.840276  989449 retry.go:31] will retry after 1.773615131s: waiting for domain to come up
	I0120 12:25:07.616220  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:07.616837  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:25:07.616896  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:25:07.616796  989449 retry.go:31] will retry after 2.381680903s: waiting for domain to come up
	I0120 12:25:09.999759  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:10.000299  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:25:10.000328  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:25:10.000256  989449 retry.go:31] will retry after 2.624643786s: waiting for domain to come up
	I0120 12:25:12.626821  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:12.627385  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:25:12.627419  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:25:12.627343  989449 retry.go:31] will retry after 4.467337721s: waiting for domain to come up
	I0120 12:25:17.099751  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:17.100340  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:25:17.100369  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:25:17.100287  989449 retry.go:31] will retry after 5.029432808s: waiting for domain to come up
	I0120 12:25:22.132296  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:22.132911  989425 main.go:141] libmachine: (old-k8s-version-134433) found domain IP: 192.168.50.250
	I0120 12:25:22.132954  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has current primary IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:22.132976  989425 main.go:141] libmachine: (old-k8s-version-134433) reserving static IP address...
	I0120 12:25:22.133282  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-134433", mac: "52:54:00:4a:b6:e2", ip: "192.168.50.250"} in network mk-old-k8s-version-134433
	I0120 12:25:22.210565  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | Getting to WaitForSSH function...
	I0120 12:25:22.210601  989425 main.go:141] libmachine: (old-k8s-version-134433) reserved static IP address 192.168.50.250 for domain old-k8s-version-134433
	I0120 12:25:22.210614  989425 main.go:141] libmachine: (old-k8s-version-134433) waiting for SSH...
	I0120 12:25:22.213385  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:22.213776  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:25:12 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:25:22.213800  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:22.213976  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | Using SSH client type: external
	I0120 12:25:22.213999  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | Using SSH private key: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433/id_rsa (-rw-------)
	I0120 12:25:22.214032  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.250 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 12:25:22.214047  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | About to run SSH command:
	I0120 12:25:22.214080  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | exit 0
	I0120 12:25:22.338311  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | SSH cmd err, output: <nil>: 
	I0120 12:25:22.338558  989425 main.go:141] libmachine: (old-k8s-version-134433) KVM machine creation complete
	I0120 12:25:22.338935  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetConfigRaw
	I0120 12:25:22.339619  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:25:22.339885  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:25:22.340055  989425 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0120 12:25:22.340071  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetState
	I0120 12:25:22.341512  989425 main.go:141] libmachine: Detecting operating system of created instance...
	I0120 12:25:22.341526  989425 main.go:141] libmachine: Waiting for SSH to be available...
	I0120 12:25:22.341531  989425 main.go:141] libmachine: Getting to WaitForSSH function...
	I0120 12:25:22.341536  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:25:22.343991  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:22.344346  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:25:12 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:25:22.344382  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:22.344533  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:25:22.344775  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:25:22.344965  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:25:22.345165  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:25:22.345342  989425 main.go:141] libmachine: Using SSH client type: native
	I0120 12:25:22.345575  989425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I0120 12:25:22.345588  989425 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0120 12:25:22.453472  989425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 12:25:22.453498  989425 main.go:141] libmachine: Detecting the provisioner...
	I0120 12:25:22.453509  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:25:22.456468  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:22.456832  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:25:12 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:25:22.456859  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:22.456982  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:25:22.457183  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:25:22.457341  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:25:22.457524  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:25:22.457692  989425 main.go:141] libmachine: Using SSH client type: native
	I0120 12:25:22.457908  989425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I0120 12:25:22.457925  989425 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0120 12:25:22.562611  989425 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0120 12:25:22.562681  989425 main.go:141] libmachine: found compatible host: buildroot
	I0120 12:25:22.562689  989425 main.go:141] libmachine: Provisioning with buildroot...
	I0120 12:25:22.562697  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetMachineName
	I0120 12:25:22.562972  989425 buildroot.go:166] provisioning hostname "old-k8s-version-134433"
	I0120 12:25:22.563002  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetMachineName
	I0120 12:25:22.563201  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:25:22.565851  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:22.566229  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:25:12 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:25:22.566257  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:22.566415  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:25:22.566607  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:25:22.566766  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:25:22.566900  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:25:22.567097  989425 main.go:141] libmachine: Using SSH client type: native
	I0120 12:25:22.567264  989425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I0120 12:25:22.567276  989425 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-134433 && echo "old-k8s-version-134433" | sudo tee /etc/hostname
	I0120 12:25:22.686794  989425 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-134433
	
	I0120 12:25:22.686832  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:25:22.689267  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:22.689629  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:25:12 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:25:22.689657  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:22.689818  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:25:22.689982  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:25:22.690123  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:25:22.690241  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:25:22.690427  989425 main.go:141] libmachine: Using SSH client type: native
	I0120 12:25:22.690599  989425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I0120 12:25:22.690614  989425 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-134433' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-134433/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-134433' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 12:25:22.801613  989425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 12:25:22.801638  989425 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20151-942401/.minikube CaCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20151-942401/.minikube}
	I0120 12:25:22.801674  989425 buildroot.go:174] setting up certificates
	I0120 12:25:22.801683  989425 provision.go:84] configureAuth start
	I0120 12:25:22.801695  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetMachineName
	I0120 12:25:22.801970  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetIP
	I0120 12:25:22.804510  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:22.804802  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:25:12 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:25:22.804827  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:22.805014  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:25:22.807051  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:22.807399  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:25:12 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:25:22.807434  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:22.807568  989425 provision.go:143] copyHostCerts
	I0120 12:25:22.807621  989425 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem, removing ...
	I0120 12:25:22.807643  989425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem
	I0120 12:25:22.807704  989425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem (1078 bytes)
	I0120 12:25:22.807816  989425 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem, removing ...
	I0120 12:25:22.807828  989425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem
	I0120 12:25:22.807851  989425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem (1123 bytes)
	I0120 12:25:22.807911  989425 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem, removing ...
	I0120 12:25:22.807919  989425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem
	I0120 12:25:22.807938  989425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem (1675 bytes)
	I0120 12:25:22.807993  989425 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-134433 san=[127.0.0.1 192.168.50.250 localhost minikube old-k8s-version-134433]
	I0120 12:25:23.051399  989425 provision.go:177] copyRemoteCerts
	I0120 12:25:23.051454  989425 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 12:25:23.051478  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:25:23.054376  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:23.054742  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:25:12 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:25:23.054766  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:23.055006  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:25:23.055247  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:25:23.055424  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:25:23.055658  989425 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433/id_rsa Username:docker}
	I0120 12:25:23.140140  989425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0120 12:25:23.161505  989425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0120 12:25:23.182315  989425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0120 12:25:23.202948  989425 provision.go:87] duration metric: took 401.251207ms to configureAuth
	I0120 12:25:23.202973  989425 buildroot.go:189] setting minikube options for container-runtime
	I0120 12:25:23.203138  989425 config.go:182] Loaded profile config "old-k8s-version-134433": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0120 12:25:23.203219  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:25:23.206178  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:23.206547  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:25:12 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:25:23.206581  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:23.206839  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:25:23.207079  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:25:23.207269  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:25:23.207489  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:25:23.207678  989425 main.go:141] libmachine: Using SSH client type: native
	I0120 12:25:23.207886  989425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I0120 12:25:23.207910  989425 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 12:25:23.425860  989425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 12:25:23.425905  989425 main.go:141] libmachine: Checking connection to Docker...
	I0120 12:25:23.425920  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetURL
	I0120 12:25:23.427275  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | using libvirt version 6000000
	I0120 12:25:23.429272  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:23.429586  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:25:12 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:25:23.429619  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:23.429774  989425 main.go:141] libmachine: Docker is up and running!
	I0120 12:25:23.429796  989425 main.go:141] libmachine: Reticulating splines...
	I0120 12:25:23.429803  989425 client.go:171] duration metric: took 26.301461876s to LocalClient.Create
	I0120 12:25:23.429826  989425 start.go:167] duration metric: took 26.301528725s to libmachine.API.Create "old-k8s-version-134433"
	I0120 12:25:23.429838  989425 start.go:293] postStartSetup for "old-k8s-version-134433" (driver="kvm2")
	I0120 12:25:23.429848  989425 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 12:25:23.429867  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:25:23.430141  989425 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 12:25:23.430178  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:25:23.432453  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:23.432755  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:25:12 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:25:23.432787  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:23.432920  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:25:23.433069  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:25:23.433226  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:25:23.433371  989425 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433/id_rsa Username:docker}
	I0120 12:25:23.516273  989425 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 12:25:23.520244  989425 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 12:25:23.520265  989425 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-942401/.minikube/addons for local assets ...
	I0120 12:25:23.520323  989425 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-942401/.minikube/files for local assets ...
	I0120 12:25:23.520400  989425 filesync.go:149] local asset: /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem -> 9496562.pem in /etc/ssl/certs
	I0120 12:25:23.520482  989425 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 12:25:23.528957  989425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem --> /etc/ssl/certs/9496562.pem (1708 bytes)
	I0120 12:25:23.551611  989425 start.go:296] duration metric: took 121.762942ms for postStartSetup
	I0120 12:25:23.551663  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetConfigRaw
	I0120 12:25:23.552228  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetIP
	I0120 12:25:23.554800  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:23.555249  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:25:12 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:25:23.555281  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:23.555581  989425 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/config.json ...
	I0120 12:25:23.555783  989425 start.go:128] duration metric: took 26.451726976s to createHost
	I0120 12:25:23.555813  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:25:23.557853  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:23.558133  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:25:12 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:25:23.558164  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:23.558280  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:25:23.558451  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:25:23.558633  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:25:23.558769  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:25:23.558915  989425 main.go:141] libmachine: Using SSH client type: native
	I0120 12:25:23.559079  989425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I0120 12:25:23.559089  989425 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 12:25:23.667851  989425 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737375923.637399880
	
	I0120 12:25:23.667875  989425 fix.go:216] guest clock: 1737375923.637399880
	I0120 12:25:23.667885  989425 fix.go:229] Guest: 2025-01-20 12:25:23.63739988 +0000 UTC Remote: 2025-01-20 12:25:23.55579891 +0000 UTC m=+26.591600371 (delta=81.60097ms)
	I0120 12:25:23.667922  989425 fix.go:200] guest clock delta is within tolerance: 81.60097ms
	I0120 12:25:23.667929  989425 start.go:83] releasing machines lock for "old-k8s-version-134433", held for 26.563998178s
	I0120 12:25:23.667957  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:25:23.668274  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetIP
	I0120 12:25:23.671670  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:23.671993  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:25:12 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:25:23.672015  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:23.672177  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:25:23.672668  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:25:23.672876  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:25:23.672959  989425 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 12:25:23.673009  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:25:23.673104  989425 ssh_runner.go:195] Run: cat /version.json
	I0120 12:25:23.673137  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:25:23.675740  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:23.676099  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:25:12 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:25:23.676140  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:23.676164  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:23.676387  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:25:23.676593  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:25:12 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:25:23.676606  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:25:23.676623  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:23.676814  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:25:23.676815  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:25:23.677026  989425 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433/id_rsa Username:docker}
	I0120 12:25:23.677051  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:25:23.677198  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:25:23.677366  989425 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433/id_rsa Username:docker}
	I0120 12:25:23.763734  989425 ssh_runner.go:195] Run: systemctl --version
	I0120 12:25:23.793612  989425 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 12:25:23.953200  989425 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 12:25:23.959021  989425 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 12:25:23.959095  989425 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 12:25:23.973584  989425 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 12:25:23.973604  989425 start.go:495] detecting cgroup driver to use...
	I0120 12:25:23.973668  989425 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 12:25:23.991439  989425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 12:25:24.004055  989425 docker.go:217] disabling cri-docker service (if available) ...
	I0120 12:25:24.004121  989425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 12:25:24.018540  989425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 12:25:24.031295  989425 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 12:25:24.150758  989425 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 12:25:24.307851  989425 docker.go:233] disabling docker service ...
	I0120 12:25:24.307934  989425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 12:25:24.320907  989425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 12:25:24.332351  989425 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 12:25:24.440897  989425 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 12:25:24.550485  989425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 12:25:24.564766  989425 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 12:25:24.584663  989425 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0120 12:25:24.584726  989425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:25:24.596509  989425 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 12:25:24.596573  989425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:25:24.608176  989425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:25:24.619502  989425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:25:24.630953  989425 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 12:25:24.642766  989425 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 12:25:24.653833  989425 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 12:25:24.653872  989425 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 12:25:24.667170  989425 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 12:25:24.676072  989425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:25:24.791117  989425 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 12:25:24.895728  989425 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 12:25:24.895793  989425 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 12:25:24.900789  989425 start.go:563] Will wait 60s for crictl version
	I0120 12:25:24.900838  989425 ssh_runner.go:195] Run: which crictl
	I0120 12:25:24.904431  989425 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 12:25:24.946045  989425 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 12:25:24.946138  989425 ssh_runner.go:195] Run: crio --version
	I0120 12:25:24.972285  989425 ssh_runner.go:195] Run: crio --version
	I0120 12:25:24.998476  989425 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0120 12:25:24.999716  989425 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetIP
	I0120 12:25:25.002983  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:25.003501  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:25:12 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:25:25.003540  989425 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:25:25.003760  989425 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0120 12:25:25.007680  989425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 12:25:25.019820  989425 kubeadm.go:883] updating cluster {Name:old-k8s-version-134433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-134433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 12:25:25.019940  989425 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0120 12:25:25.019984  989425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:25:25.052588  989425 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0120 12:25:25.052645  989425 ssh_runner.go:195] Run: which lz4
	I0120 12:25:25.056523  989425 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 12:25:25.060973  989425 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 12:25:25.061006  989425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0120 12:25:26.528531  989425 crio.go:462] duration metric: took 1.47203598s to copy over tarball
	I0120 12:25:26.528609  989425 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 12:25:28.999385  989425 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.470733651s)
	I0120 12:25:28.999419  989425 crio.go:469] duration metric: took 2.470853982s to extract the tarball
	I0120 12:25:28.999429  989425 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0120 12:25:29.042912  989425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:25:29.084820  989425 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0120 12:25:29.084849  989425 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0120 12:25:29.084909  989425 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:25:29.084934  989425 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:25:29.084959  989425 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:25:29.084947  989425 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:25:29.084999  989425 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0120 12:25:29.085014  989425 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:25:29.085047  989425 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0120 12:25:29.085092  989425 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0120 12:25:29.086420  989425 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:25:29.086627  989425 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:25:29.086688  989425 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0120 12:25:29.086716  989425 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:25:29.086753  989425 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:25:29.086693  989425 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:25:29.086696  989425 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0120 12:25:29.086755  989425 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0120 12:25:29.303407  989425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0120 12:25:29.307859  989425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:25:29.313389  989425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:25:29.314591  989425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0120 12:25:29.319794  989425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:25:29.331925  989425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0120 12:25:29.353611  989425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:25:29.397745  989425 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0120 12:25:29.397812  989425 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0120 12:25:29.397859  989425 ssh_runner.go:195] Run: which crictl
	I0120 12:25:29.451685  989425 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0120 12:25:29.451746  989425 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:25:29.451808  989425 ssh_runner.go:195] Run: which crictl
	I0120 12:25:29.475375  989425 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0120 12:25:29.475443  989425 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:25:29.475505  989425 ssh_runner.go:195] Run: which crictl
	I0120 12:25:29.486774  989425 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0120 12:25:29.486827  989425 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0120 12:25:29.486862  989425 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0120 12:25:29.486876  989425 ssh_runner.go:195] Run: which crictl
	I0120 12:25:29.486900  989425 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:25:29.486776  989425 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0120 12:25:29.486947  989425 ssh_runner.go:195] Run: which crictl
	I0120 12:25:29.486954  989425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 12:25:29.486955  989425 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:25:29.487000  989425 ssh_runner.go:195] Run: which crictl
	I0120 12:25:29.486780  989425 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0120 12:25:29.487045  989425 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0120 12:25:29.487054  989425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:25:29.487070  989425 ssh_runner.go:195] Run: which crictl
	I0120 12:25:29.489682  989425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:25:29.550515  989425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:25:29.550611  989425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:25:29.550632  989425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 12:25:29.550661  989425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 12:25:29.550713  989425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 12:25:29.550770  989425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:25:29.567106  989425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:25:29.676340  989425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:25:29.724817  989425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 12:25:29.724873  989425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:25:29.724988  989425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:25:29.724994  989425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 12:25:29.725057  989425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 12:25:29.728153  989425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:25:29.731763  989425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:25:29.867661  989425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 12:25:29.867662  989425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:25:29.872126  989425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0120 12:25:29.872203  989425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0120 12:25:29.872258  989425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 12:25:29.872304  989425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0120 12:25:29.872358  989425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0120 12:25:29.943040  989425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0120 12:25:29.943087  989425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0120 12:25:29.943091  989425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0120 12:25:30.264311  989425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:25:30.402469  989425 cache_images.go:92] duration metric: took 1.317599075s to LoadCachedImages
	W0120 12:25:30.402593  989425 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0120 12:25:30.402613  989425 kubeadm.go:934] updating node { 192.168.50.250 8443 v1.20.0 crio true true} ...
	I0120 12:25:30.402746  989425 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-134433 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-134433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 12:25:30.402835  989425 ssh_runner.go:195] Run: crio config
	I0120 12:25:30.451559  989425 cni.go:84] Creating CNI manager for ""
	I0120 12:25:30.451582  989425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:25:30.451594  989425 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 12:25:30.451620  989425 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.250 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-134433 NodeName:old-k8s-version-134433 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0120 12:25:30.451766  989425 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-134433"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.250
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.250"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 12:25:30.451865  989425 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0120 12:25:30.461678  989425 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 12:25:30.461758  989425 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 12:25:30.471627  989425 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0120 12:25:30.487227  989425 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 12:25:30.503247  989425 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0120 12:25:30.518623  989425 ssh_runner.go:195] Run: grep 192.168.50.250	control-plane.minikube.internal$ /etc/hosts
	I0120 12:25:30.522304  989425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.250	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 12:25:30.534332  989425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:25:30.657947  989425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:25:30.674996  989425 certs.go:68] Setting up /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433 for IP: 192.168.50.250
	I0120 12:25:30.675024  989425 certs.go:194] generating shared ca certs ...
	I0120 12:25:30.675049  989425 certs.go:226] acquiring lock for ca certs: {Name:mk7d6f61e0c358ebc451db1b73619131ab80bd63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:25:30.675266  989425 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20151-942401/.minikube/ca.key
	I0120 12:25:30.675332  989425 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.key
	I0120 12:25:30.675346  989425 certs.go:256] generating profile certs ...
	I0120 12:25:30.675424  989425 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/client.key
	I0120 12:25:30.675457  989425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/client.crt with IP's: []
	I0120 12:25:31.094629  989425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/client.crt ...
	I0120 12:25:31.094664  989425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/client.crt: {Name:mkf80c28b98931ac93f9fa1ac55653146433ed40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:25:31.094863  989425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/client.key ...
	I0120 12:25:31.094885  989425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/client.key: {Name:mk3167171cda17b253ef75e32e7333563dfc097a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:25:31.095007  989425 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/apiserver.key.6d656c93
	I0120 12:25:31.095029  989425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/apiserver.crt.6d656c93 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.250]
	I0120 12:25:31.258824  989425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/apiserver.crt.6d656c93 ...
	I0120 12:25:31.258853  989425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/apiserver.crt.6d656c93: {Name:mk34553942ac2293e300722f44928e7ed4448abe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:25:31.259032  989425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/apiserver.key.6d656c93 ...
	I0120 12:25:31.259051  989425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/apiserver.key.6d656c93: {Name:mk7d809a9161f49c2563940dcb7b85fe616c3ea3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:25:31.259153  989425 certs.go:381] copying /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/apiserver.crt.6d656c93 -> /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/apiserver.crt
	I0120 12:25:31.259230  989425 certs.go:385] copying /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/apiserver.key.6d656c93 -> /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/apiserver.key
	I0120 12:25:31.259282  989425 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/proxy-client.key
	I0120 12:25:31.259298  989425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/proxy-client.crt with IP's: []
	I0120 12:25:31.399130  989425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/proxy-client.crt ...
	I0120 12:25:31.399162  989425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/proxy-client.crt: {Name:mkaeb6f4a25baf8144019273f88f78656fa0b628 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:25:31.399348  989425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/proxy-client.key ...
	I0120 12:25:31.399367  989425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/proxy-client.key: {Name:mk54ae09ec6b743b281fd01a58d0bf264a595e49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:25:31.399573  989425 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656.pem (1338 bytes)
	W0120 12:25:31.399615  989425 certs.go:480] ignoring /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656_empty.pem, impossibly tiny 0 bytes
	I0120 12:25:31.399625  989425 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem (1679 bytes)
	I0120 12:25:31.399648  989425 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem (1078 bytes)
	I0120 12:25:31.399669  989425 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem (1123 bytes)
	I0120 12:25:31.399690  989425 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem (1675 bytes)
	I0120 12:25:31.399732  989425 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem (1708 bytes)
	I0120 12:25:31.400297  989425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 12:25:31.431955  989425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0120 12:25:31.454724  989425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 12:25:31.479233  989425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 12:25:31.514558  989425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0120 12:25:31.549166  989425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0120 12:25:31.593875  989425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 12:25:31.620401  989425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 12:25:31.645241  989425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 12:25:31.667476  989425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656.pem --> /usr/share/ca-certificates/949656.pem (1338 bytes)
	I0120 12:25:31.690503  989425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem --> /usr/share/ca-certificates/9496562.pem (1708 bytes)
	I0120 12:25:31.714182  989425 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 12:25:31.732442  989425 ssh_runner.go:195] Run: openssl version
	I0120 12:25:31.739674  989425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 12:25:31.752455  989425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:25:31.757118  989425 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 11:22 /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:25:31.757182  989425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:25:31.762976  989425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 12:25:31.773229  989425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/949656.pem && ln -fs /usr/share/ca-certificates/949656.pem /etc/ssl/certs/949656.pem"
	I0120 12:25:31.783705  989425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/949656.pem
	I0120 12:25:31.787945  989425 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 11:30 /usr/share/ca-certificates/949656.pem
	I0120 12:25:31.787993  989425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/949656.pem
	I0120 12:25:31.793705  989425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/949656.pem /etc/ssl/certs/51391683.0"
	I0120 12:25:31.803820  989425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9496562.pem && ln -fs /usr/share/ca-certificates/9496562.pem /etc/ssl/certs/9496562.pem"
	I0120 12:25:31.814595  989425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9496562.pem
	I0120 12:25:31.819359  989425 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 11:30 /usr/share/ca-certificates/9496562.pem
	I0120 12:25:31.819415  989425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9496562.pem
	I0120 12:25:31.825603  989425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9496562.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 12:25:31.836673  989425 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 12:25:31.841372  989425 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0120 12:25:31.841459  989425 kubeadm.go:392] StartCluster: {Name:old-k8s-version-134433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-134433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:25:31.841556  989425 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 12:25:31.841607  989425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 12:25:31.886587  989425 cri.go:89] found id: ""
	I0120 12:25:31.886666  989425 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 12:25:31.897849  989425 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 12:25:31.908287  989425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:25:31.918432  989425 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:25:31.918449  989425 kubeadm.go:157] found existing configuration files:
	
	I0120 12:25:31.918492  989425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 12:25:31.928101  989425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:25:31.928148  989425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:25:31.938238  989425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 12:25:31.948052  989425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:25:31.948130  989425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:25:31.958212  989425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 12:25:31.967885  989425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:25:31.967937  989425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:25:31.977110  989425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 12:25:31.985742  989425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:25:31.985793  989425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 12:25:31.995228  989425 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 12:25:32.105886  989425 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0120 12:25:32.105970  989425 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 12:25:32.252581  989425 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 12:25:32.252766  989425 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 12:25:32.252944  989425 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0120 12:25:32.484481  989425 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 12:25:32.585750  989425 out.go:235]   - Generating certificates and keys ...
	I0120 12:25:32.585892  989425 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 12:25:32.586038  989425 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 12:25:32.651778  989425 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0120 12:25:32.964982  989425 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0120 12:25:33.121114  989425 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0120 12:25:33.271260  989425 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0120 12:25:33.328325  989425 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0120 12:25:33.328501  989425 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-134433] and IPs [192.168.50.250 127.0.0.1 ::1]
	I0120 12:25:33.506051  989425 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0120 12:25:33.506306  989425 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-134433] and IPs [192.168.50.250 127.0.0.1 ::1]
	I0120 12:25:33.721695  989425 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0120 12:25:33.827671  989425 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0120 12:25:34.086992  989425 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0120 12:25:34.090455  989425 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 12:25:34.216737  989425 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 12:25:34.448898  989425 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 12:25:34.510125  989425 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 12:25:34.670325  989425 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 12:25:34.688581  989425 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 12:25:34.689834  989425 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 12:25:34.689907  989425 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 12:25:34.799443  989425 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 12:25:34.802293  989425 out.go:235]   - Booting up control plane ...
	I0120 12:25:34.802441  989425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 12:25:34.813632  989425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 12:25:34.814670  989425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 12:25:34.815570  989425 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 12:25:34.819560  989425 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0120 12:26:14.809365  989425 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0120 12:26:14.810588  989425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:26:14.810870  989425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:26:19.810922  989425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:26:19.811242  989425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:26:29.810810  989425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:26:29.811165  989425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:26:49.811626  989425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:26:49.811941  989425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:27:29.812560  989425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:27:29.812837  989425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:27:29.812856  989425 kubeadm.go:310] 
	I0120 12:27:29.812916  989425 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0120 12:27:29.812952  989425 kubeadm.go:310] 		timed out waiting for the condition
	I0120 12:27:29.812963  989425 kubeadm.go:310] 
	I0120 12:27:29.813039  989425 kubeadm.go:310] 	This error is likely caused by:
	I0120 12:27:29.813087  989425 kubeadm.go:310] 		- The kubelet is not running
	I0120 12:27:29.813197  989425 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0120 12:27:29.813212  989425 kubeadm.go:310] 
	I0120 12:27:29.813346  989425 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0120 12:27:29.813395  989425 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0120 12:27:29.813430  989425 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0120 12:27:29.813440  989425 kubeadm.go:310] 
	I0120 12:27:29.813575  989425 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0120 12:27:29.813701  989425 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0120 12:27:29.813717  989425 kubeadm.go:310] 
	I0120 12:27:29.813868  989425 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0120 12:27:29.813994  989425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0120 12:27:29.814108  989425 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0120 12:27:29.814173  989425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0120 12:27:29.814191  989425 kubeadm.go:310] 
	I0120 12:27:29.814838  989425 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 12:27:29.814964  989425 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0120 12:27:29.815022  989425 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0120 12:27:29.815151  989425 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-134433] and IPs [192.168.50.250 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-134433] and IPs [192.168.50.250 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-134433] and IPs [192.168.50.250 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-134433] and IPs [192.168.50.250 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0120 12:27:29.815193  989425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 12:27:30.265890  989425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 12:27:30.281740  989425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:27:30.291250  989425 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:27:30.291276  989425 kubeadm.go:157] found existing configuration files:
	
	I0120 12:27:30.291326  989425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 12:27:30.300223  989425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:27:30.300281  989425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:27:30.309346  989425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 12:27:30.318977  989425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:27:30.319037  989425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:27:30.328354  989425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 12:27:30.337202  989425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:27:30.337237  989425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:27:30.346297  989425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 12:27:30.356017  989425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:27:30.356074  989425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 12:27:30.365115  989425 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 12:27:30.608597  989425 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 12:29:26.833573  989425 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0120 12:29:26.833743  989425 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0120 12:29:26.835308  989425 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0120 12:29:26.835407  989425 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 12:29:26.835527  989425 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 12:29:26.835659  989425 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 12:29:26.835762  989425 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0120 12:29:26.835845  989425 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 12:29:26.837716  989425 out.go:235]   - Generating certificates and keys ...
	I0120 12:29:26.837817  989425 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 12:29:26.837899  989425 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 12:29:26.838016  989425 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 12:29:26.838112  989425 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 12:29:26.838191  989425 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 12:29:26.838265  989425 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 12:29:26.838323  989425 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 12:29:26.838382  989425 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 12:29:26.838447  989425 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 12:29:26.838513  989425 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 12:29:26.838574  989425 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 12:29:26.838662  989425 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 12:29:26.838752  989425 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 12:29:26.838851  989425 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 12:29:26.838951  989425 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 12:29:26.839036  989425 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 12:29:26.839158  989425 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 12:29:26.839278  989425 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 12:29:26.839375  989425 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 12:29:26.839471  989425 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 12:29:26.841642  989425 out.go:235]   - Booting up control plane ...
	I0120 12:29:26.841727  989425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 12:29:26.841810  989425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 12:29:26.841885  989425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 12:29:26.841967  989425 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 12:29:26.842115  989425 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0120 12:29:26.842163  989425 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0120 12:29:26.842255  989425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:29:26.842515  989425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:29:26.842635  989425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:29:26.842891  989425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:29:26.842993  989425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:29:26.843228  989425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:29:26.843320  989425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:29:26.843518  989425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:29:26.843576  989425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:29:26.843725  989425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:29:26.843740  989425 kubeadm.go:310] 
	I0120 12:29:26.843790  989425 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0120 12:29:26.843850  989425 kubeadm.go:310] 		timed out waiting for the condition
	I0120 12:29:26.843863  989425 kubeadm.go:310] 
	I0120 12:29:26.843911  989425 kubeadm.go:310] 	This error is likely caused by:
	I0120 12:29:26.843976  989425 kubeadm.go:310] 		- The kubelet is not running
	I0120 12:29:26.844115  989425 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0120 12:29:26.844125  989425 kubeadm.go:310] 
	I0120 12:29:26.844252  989425 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0120 12:29:26.844306  989425 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0120 12:29:26.844345  989425 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0120 12:29:26.844355  989425 kubeadm.go:310] 
	I0120 12:29:26.844508  989425 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0120 12:29:26.844611  989425 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0120 12:29:26.844621  989425 kubeadm.go:310] 
	I0120 12:29:26.844779  989425 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0120 12:29:26.844885  989425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0120 12:29:26.844988  989425 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0120 12:29:26.845088  989425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0120 12:29:26.845124  989425 kubeadm.go:310] 
	I0120 12:29:26.845188  989425 kubeadm.go:394] duration metric: took 3m55.003736536s to StartCluster
	I0120 12:29:26.845250  989425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:29:26.845327  989425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:29:26.887064  989425 cri.go:89] found id: ""
	I0120 12:29:26.887107  989425 logs.go:282] 0 containers: []
	W0120 12:29:26.887121  989425 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:29:26.887135  989425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:29:26.887223  989425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:29:26.927300  989425 cri.go:89] found id: ""
	I0120 12:29:26.927334  989425 logs.go:282] 0 containers: []
	W0120 12:29:26.927353  989425 logs.go:284] No container was found matching "etcd"
	I0120 12:29:26.927363  989425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:29:26.927437  989425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:29:26.959893  989425 cri.go:89] found id: ""
	I0120 12:29:26.959924  989425 logs.go:282] 0 containers: []
	W0120 12:29:26.959934  989425 logs.go:284] No container was found matching "coredns"
	I0120 12:29:26.959941  989425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:29:26.960004  989425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:29:26.995151  989425 cri.go:89] found id: ""
	I0120 12:29:26.995185  989425 logs.go:282] 0 containers: []
	W0120 12:29:26.995196  989425 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:29:26.995205  989425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:29:26.995277  989425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:29:27.051881  989425 cri.go:89] found id: ""
	I0120 12:29:27.051920  989425 logs.go:282] 0 containers: []
	W0120 12:29:27.051932  989425 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:29:27.051940  989425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:29:27.052018  989425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:29:27.089793  989425 cri.go:89] found id: ""
	I0120 12:29:27.089828  989425 logs.go:282] 0 containers: []
	W0120 12:29:27.089838  989425 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:29:27.089846  989425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:29:27.089914  989425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:29:27.123338  989425 cri.go:89] found id: ""
	I0120 12:29:27.123371  989425 logs.go:282] 0 containers: []
	W0120 12:29:27.123383  989425 logs.go:284] No container was found matching "kindnet"
	I0120 12:29:27.123398  989425 logs.go:123] Gathering logs for kubelet ...
	I0120 12:29:27.123418  989425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:29:27.171032  989425 logs.go:123] Gathering logs for dmesg ...
	I0120 12:29:27.171065  989425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:29:27.183618  989425 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:29:27.183649  989425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:29:27.306453  989425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:29:27.306485  989425 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:29:27.306504  989425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:29:27.407062  989425 logs.go:123] Gathering logs for container status ...
	I0120 12:29:27.407104  989425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0120 12:29:27.455571  989425 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0120 12:29:27.455650  989425 out.go:270] * 
	* 
	W0120 12:29:27.455737  989425 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0120 12:29:27.455761  989425 out.go:270] * 
	* 
	W0120 12:29:27.457088  989425 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0120 12:29:27.460700  989425 out.go:201] 
	W0120 12:29:27.461752  989425 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0120 12:29:27.461809  989425 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0120 12:29:27.461841  989425 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0120 12:29:27.463902  989425 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-134433 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-134433 -n old-k8s-version-134433
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-134433 -n old-k8s-version-134433: exit status 6 (249.038377ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0120 12:29:27.763679  992762 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-134433" does not appear in /home/jenkins/minikube-integration/20151-942401/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-134433" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (270.82s)
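The failure above exits with K8S_KUBELET_NOT_RUNNING, and the captured stderr's own suggestion is to retry the start with the kubelet cgroup driver pinned to systemd, which points at a kubelet/CRI-O cgroup-driver mismatch on this older Kubernetes version. A minimal sketch of that retry, reusing the flags from the failing invocation above and adding only the --extra-config value quoted in the log (an untested suggestion, not a verified fix for this run), would be:

	out/minikube-linux-amd64 start -p old-k8s-version-134433 --memory=2200 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd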

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (1620.72s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-496524 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-496524 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: signal: killed (26m58.530243103s)

                                                
                                                
-- stdout --
	* [no-preload-496524] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20151
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20151-942401/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-942401/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "no-preload-496524" primary control-plane node in "no-preload-496524" cluster
	* Restarting existing kvm2 VM for "no-preload-496524" ...
	* Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-496524 addons enable metrics-server
	
	* Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 12:28:15.867372  992109 out.go:345] Setting OutFile to fd 1 ...
	I0120 12:28:15.867470  992109 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:28:15.867478  992109 out.go:358] Setting ErrFile to fd 2...
	I0120 12:28:15.867483  992109 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:28:15.867650  992109 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-942401/.minikube/bin
	I0120 12:28:15.868190  992109 out.go:352] Setting JSON to false
	I0120 12:28:15.869204  992109 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":18639,"bootTime":1737357457,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 12:28:15.869308  992109 start.go:139] virtualization: kvm guest
	I0120 12:28:15.871447  992109 out.go:177] * [no-preload-496524] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 12:28:15.872717  992109 out.go:177]   - MINIKUBE_LOCATION=20151
	I0120 12:28:15.872737  992109 notify.go:220] Checking for updates...
	I0120 12:28:15.875016  992109 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 12:28:15.876319  992109 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:28:15.877515  992109 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-942401/.minikube
	I0120 12:28:15.878573  992109 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 12:28:15.879669  992109 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 12:28:15.881045  992109 config.go:182] Loaded profile config "no-preload-496524": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:28:15.881427  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:28:15.881476  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:28:15.896700  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37727
	I0120 12:28:15.897235  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:28:15.897889  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:28:15.897915  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:28:15.898350  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:28:15.898583  992109 main.go:141] libmachine: (no-preload-496524) Calling .DriverName
	I0120 12:28:15.898920  992109 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 12:28:15.899360  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:28:15.899432  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:28:15.914247  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38965
	I0120 12:28:15.914620  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:28:15.915189  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:28:15.915216  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:28:15.915524  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:28:15.915747  992109 main.go:141] libmachine: (no-preload-496524) Calling .DriverName
	I0120 12:28:15.953238  992109 out.go:177] * Using the kvm2 driver based on existing profile
	I0120 12:28:15.954396  992109 start.go:297] selected driver: kvm2
	I0120 12:28:15.954408  992109 start.go:901] validating driver "kvm2" against &{Name:no-preload-496524 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-496524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:28:15.954571  992109 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 12:28:15.955285  992109 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:28:15.955371  992109 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20151-942401/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 12:28:15.969913  992109 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 12:28:15.970294  992109 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 12:28:15.970328  992109 cni.go:84] Creating CNI manager for ""
	I0120 12:28:15.970396  992109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:28:15.970468  992109 start.go:340] cluster config:
	{Name:no-preload-496524 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-496524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:28:15.970610  992109 iso.go:125] acquiring lock: {Name:mk7f0ac9e7ba04626414e742c3e5292d79996a4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:28:15.972201  992109 out.go:177] * Starting "no-preload-496524" primary control-plane node in "no-preload-496524" cluster
	I0120 12:28:15.973363  992109 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 12:28:15.973504  992109 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/no-preload-496524/config.json ...
	I0120 12:28:15.973638  992109 cache.go:107] acquiring lock: {Name:mk229feff18638f3077bc6521c8bc52f8edfb764 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:28:15.973656  992109 cache.go:107] acquiring lock: {Name:mk99801f19176fe341db1aeebe31c9983178b601 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:28:15.973656  992109 cache.go:107] acquiring lock: {Name:mkd8274f3c224b4e088abe6e13c5816c93a848a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:28:15.973745  992109 cache.go:115] /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.0 exists
	I0120 12:28:15.973765  992109 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.0" -> "/home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.0" took 124.95µs
	I0120 12:28:15.973769  992109 cache.go:115] /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0120 12:28:15.973777  992109 cache.go:115] /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.0 exists
	I0120 12:28:15.973784  992109 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.0 -> /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.0 succeeded
	I0120 12:28:15.973789  992109 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.0" -> "/home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.0" took 138.801µs
	I0120 12:28:15.973789  992109 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 163.077µs
	I0120 12:28:15.973787  992109 start.go:360] acquireMachinesLock for no-preload-496524: {Name:mkd5527ce9753efd08511b23d71dbb6bbf416f1b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 12:28:15.973801  992109 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0120 12:28:15.973798  992109 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.0 -> /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.0 succeeded
	I0120 12:28:15.973788  992109 cache.go:107] acquiring lock: {Name:mkb842461a1dc6235d4482c31176b518bf26a57d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:28:15.973814  992109 cache.go:107] acquiring lock: {Name:mkb15fd237bf68ce7d072cef1ad73e04c63697f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:28:15.973850  992109 start.go:364] duration metric: took 41.939µs to acquireMachinesLock for "no-preload-496524"
	I0120 12:28:15.973857  992109 cache.go:115] /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0120 12:28:15.973863  992109 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 52.455µs
	I0120 12:28:15.973868  992109 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0120 12:28:15.973802  992109 cache.go:107] acquiring lock: {Name:mk97ff637eefb015dd07d408c6dd2adfbe76c7ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:28:15.973870  992109 start.go:96] Skipping create...Using existing machine configuration
	I0120 12:28:15.973880  992109 fix.go:54] fixHost starting: 
	I0120 12:28:15.973881  992109 cache.go:115] /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.0 exists
	I0120 12:28:15.973848  992109 cache.go:107] acquiring lock: {Name:mk7f267c88724d53b57b6767540f4399bd36eb5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:28:15.973893  992109 cache.go:115] /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.0 exists
	I0120 12:28:15.973891  992109 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.0" -> "/home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.0" took 179.627µs
	I0120 12:28:15.973869  992109 cache.go:107] acquiring lock: {Name:mk6991502f4a3c2b1e4799bfbb27086bd58b6cf8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:28:15.973902  992109 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.0" -> "/home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.0" took 103.033µs
	I0120 12:28:15.973910  992109 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.0 -> /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.0 succeeded
	I0120 12:28:15.973970  992109 cache.go:115] /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
	I0120 12:28:15.973902  992109 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.0 -> /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.0 succeeded
	I0120 12:28:15.973984  992109 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 176.678µs
	I0120 12:28:15.974004  992109 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
	I0120 12:28:15.974014  992109 cache.go:115] /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0120 12:28:15.974031  992109 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 207.99µs
	I0120 12:28:15.974045  992109 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0120 12:28:15.974053  992109 cache.go:87] Successfully saved all images to host disk.
	I0120 12:28:15.974260  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:28:15.974319  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:28:15.988034  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40291
	I0120 12:28:15.988433  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:28:15.988874  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:28:15.988896  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:28:15.989255  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:28:15.989447  992109 main.go:141] libmachine: (no-preload-496524) Calling .DriverName
	I0120 12:28:15.989607  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetState
	I0120 12:28:15.991190  992109 fix.go:112] recreateIfNeeded on no-preload-496524: state=Stopped err=<nil>
	I0120 12:28:15.991233  992109 main.go:141] libmachine: (no-preload-496524) Calling .DriverName
	W0120 12:28:15.991417  992109 fix.go:138] unexpected machine state, will restart: <nil>
	I0120 12:28:15.994216  992109 out.go:177] * Restarting existing kvm2 VM for "no-preload-496524" ...
	I0120 12:28:15.995311  992109 main.go:141] libmachine: (no-preload-496524) Calling .Start
	I0120 12:28:15.995506  992109 main.go:141] libmachine: (no-preload-496524) starting domain...
	I0120 12:28:15.995532  992109 main.go:141] libmachine: (no-preload-496524) ensuring networks are active...
	I0120 12:28:15.996182  992109 main.go:141] libmachine: (no-preload-496524) Ensuring network default is active
	I0120 12:28:15.996617  992109 main.go:141] libmachine: (no-preload-496524) Ensuring network mk-no-preload-496524 is active
	I0120 12:28:15.997128  992109 main.go:141] libmachine: (no-preload-496524) getting domain XML...
	I0120 12:28:15.997989  992109 main.go:141] libmachine: (no-preload-496524) creating domain...
	I0120 12:28:17.227962  992109 main.go:141] libmachine: (no-preload-496524) waiting for IP...
	I0120 12:28:17.228945  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:17.229466  992109 main.go:141] libmachine: (no-preload-496524) DBG | unable to find current IP address of domain no-preload-496524 in network mk-no-preload-496524
	I0120 12:28:17.229554  992109 main.go:141] libmachine: (no-preload-496524) DBG | I0120 12:28:17.229448  992145 retry.go:31] will retry after 273.056486ms: waiting for domain to come up
	I0120 12:28:17.503930  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:17.504496  992109 main.go:141] libmachine: (no-preload-496524) DBG | unable to find current IP address of domain no-preload-496524 in network mk-no-preload-496524
	I0120 12:28:17.504530  992109 main.go:141] libmachine: (no-preload-496524) DBG | I0120 12:28:17.504443  992145 retry.go:31] will retry after 388.830054ms: waiting for domain to come up
	I0120 12:28:17.895280  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:17.895780  992109 main.go:141] libmachine: (no-preload-496524) DBG | unable to find current IP address of domain no-preload-496524 in network mk-no-preload-496524
	I0120 12:28:17.895816  992109 main.go:141] libmachine: (no-preload-496524) DBG | I0120 12:28:17.895736  992145 retry.go:31] will retry after 402.447139ms: waiting for domain to come up
	I0120 12:28:18.299397  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:18.300058  992109 main.go:141] libmachine: (no-preload-496524) DBG | unable to find current IP address of domain no-preload-496524 in network mk-no-preload-496524
	I0120 12:28:18.300082  992109 main.go:141] libmachine: (no-preload-496524) DBG | I0120 12:28:18.300020  992145 retry.go:31] will retry after 546.444181ms: waiting for domain to come up
	I0120 12:28:18.847550  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:18.848140  992109 main.go:141] libmachine: (no-preload-496524) DBG | unable to find current IP address of domain no-preload-496524 in network mk-no-preload-496524
	I0120 12:28:18.848160  992109 main.go:141] libmachine: (no-preload-496524) DBG | I0120 12:28:18.848125  992145 retry.go:31] will retry after 705.950506ms: waiting for domain to come up
	I0120 12:28:19.556179  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:19.556675  992109 main.go:141] libmachine: (no-preload-496524) DBG | unable to find current IP address of domain no-preload-496524 in network mk-no-preload-496524
	I0120 12:28:19.556708  992109 main.go:141] libmachine: (no-preload-496524) DBG | I0120 12:28:19.556641  992145 retry.go:31] will retry after 596.685215ms: waiting for domain to come up
	I0120 12:28:20.155389  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:20.155991  992109 main.go:141] libmachine: (no-preload-496524) DBG | unable to find current IP address of domain no-preload-496524 in network mk-no-preload-496524
	I0120 12:28:20.156015  992109 main.go:141] libmachine: (no-preload-496524) DBG | I0120 12:28:20.155947  992145 retry.go:31] will retry after 940.58098ms: waiting for domain to come up
	I0120 12:28:21.098386  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:21.098832  992109 main.go:141] libmachine: (no-preload-496524) DBG | unable to find current IP address of domain no-preload-496524 in network mk-no-preload-496524
	I0120 12:28:21.098870  992109 main.go:141] libmachine: (no-preload-496524) DBG | I0120 12:28:21.098793  992145 retry.go:31] will retry after 1.326606397s: waiting for domain to come up
	I0120 12:28:22.427239  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:22.427665  992109 main.go:141] libmachine: (no-preload-496524) DBG | unable to find current IP address of domain no-preload-496524 in network mk-no-preload-496524
	I0120 12:28:22.427691  992109 main.go:141] libmachine: (no-preload-496524) DBG | I0120 12:28:22.427628  992145 retry.go:31] will retry after 1.314224724s: waiting for domain to come up
	I0120 12:28:23.743573  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:23.744046  992109 main.go:141] libmachine: (no-preload-496524) DBG | unable to find current IP address of domain no-preload-496524 in network mk-no-preload-496524
	I0120 12:28:23.744081  992109 main.go:141] libmachine: (no-preload-496524) DBG | I0120 12:28:23.743996  992145 retry.go:31] will retry after 1.857228225s: waiting for domain to come up
	I0120 12:28:25.603605  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:25.604156  992109 main.go:141] libmachine: (no-preload-496524) DBG | unable to find current IP address of domain no-preload-496524 in network mk-no-preload-496524
	I0120 12:28:25.604185  992109 main.go:141] libmachine: (no-preload-496524) DBG | I0120 12:28:25.604108  992145 retry.go:31] will retry after 1.993192052s: waiting for domain to come up
	I0120 12:28:27.599519  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:27.600041  992109 main.go:141] libmachine: (no-preload-496524) DBG | unable to find current IP address of domain no-preload-496524 in network mk-no-preload-496524
	I0120 12:28:27.600065  992109 main.go:141] libmachine: (no-preload-496524) DBG | I0120 12:28:27.599999  992145 retry.go:31] will retry after 2.957378348s: waiting for domain to come up
	I0120 12:28:30.558579  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:30.559081  992109 main.go:141] libmachine: (no-preload-496524) DBG | unable to find current IP address of domain no-preload-496524 in network mk-no-preload-496524
	I0120 12:28:30.559117  992109 main.go:141] libmachine: (no-preload-496524) DBG | I0120 12:28:30.559044  992145 retry.go:31] will retry after 3.868162267s: waiting for domain to come up
	I0120 12:28:34.431278  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:34.431781  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has current primary IP address 192.168.61.107 and MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:34.431801  992109 main.go:141] libmachine: (no-preload-496524) found domain IP: 192.168.61.107
	I0120 12:28:34.431810  992109 main.go:141] libmachine: (no-preload-496524) reserving static IP address...
	I0120 12:28:34.432231  992109 main.go:141] libmachine: (no-preload-496524) DBG | found host DHCP lease matching {name: "no-preload-496524", mac: "52:54:00:13:8f:cb", ip: "192.168.61.107"} in network mk-no-preload-496524: {Iface:virbr3 ExpiryTime:2025-01-20 13:28:26 +0000 UTC Type:0 Mac:52:54:00:13:8f:cb Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-496524 Clientid:01:52:54:00:13:8f:cb}
	I0120 12:28:34.432258  992109 main.go:141] libmachine: (no-preload-496524) reserved static IP address 192.168.61.107 for domain no-preload-496524
	I0120 12:28:34.432275  992109 main.go:141] libmachine: (no-preload-496524) DBG | skip adding static IP to network mk-no-preload-496524 - found existing host DHCP lease matching {name: "no-preload-496524", mac: "52:54:00:13:8f:cb", ip: "192.168.61.107"}
	I0120 12:28:34.432289  992109 main.go:141] libmachine: (no-preload-496524) DBG | Getting to WaitForSSH function...
	I0120 12:28:34.432300  992109 main.go:141] libmachine: (no-preload-496524) waiting for SSH...
	I0120 12:28:34.434418  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:34.434768  992109 main.go:141] libmachine: (no-preload-496524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:8f:cb", ip: ""} in network mk-no-preload-496524: {Iface:virbr3 ExpiryTime:2025-01-20 13:28:26 +0000 UTC Type:0 Mac:52:54:00:13:8f:cb Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-496524 Clientid:01:52:54:00:13:8f:cb}
	I0120 12:28:34.434800  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined IP address 192.168.61.107 and MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:34.434890  992109 main.go:141] libmachine: (no-preload-496524) DBG | Using SSH client type: external
	I0120 12:28:34.434912  992109 main.go:141] libmachine: (no-preload-496524) DBG | Using SSH private key: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/no-preload-496524/id_rsa (-rw-------)
	I0120 12:28:34.434942  992109 main.go:141] libmachine: (no-preload-496524) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20151-942401/.minikube/machines/no-preload-496524/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 12:28:34.434954  992109 main.go:141] libmachine: (no-preload-496524) DBG | About to run SSH command:
	I0120 12:28:34.434964  992109 main.go:141] libmachine: (no-preload-496524) DBG | exit 0
	I0120 12:28:34.554041  992109 main.go:141] libmachine: (no-preload-496524) DBG | SSH cmd err, output: <nil>: 
	I0120 12:28:34.554389  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetConfigRaw
	I0120 12:28:34.555003  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetIP
	I0120 12:28:34.557237  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:34.557551  992109 main.go:141] libmachine: (no-preload-496524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:8f:cb", ip: ""} in network mk-no-preload-496524: {Iface:virbr3 ExpiryTime:2025-01-20 13:28:26 +0000 UTC Type:0 Mac:52:54:00:13:8f:cb Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-496524 Clientid:01:52:54:00:13:8f:cb}
	I0120 12:28:34.557575  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined IP address 192.168.61.107 and MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:34.557838  992109 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/no-preload-496524/config.json ...
	I0120 12:28:34.558028  992109 machine.go:93] provisionDockerMachine start ...
	I0120 12:28:34.558051  992109 main.go:141] libmachine: (no-preload-496524) Calling .DriverName
	I0120 12:28:34.558272  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHHostname
	I0120 12:28:34.560822  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:34.561168  992109 main.go:141] libmachine: (no-preload-496524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:8f:cb", ip: ""} in network mk-no-preload-496524: {Iface:virbr3 ExpiryTime:2025-01-20 13:28:26 +0000 UTC Type:0 Mac:52:54:00:13:8f:cb Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-496524 Clientid:01:52:54:00:13:8f:cb}
	I0120 12:28:34.561193  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined IP address 192.168.61.107 and MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:34.561313  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHPort
	I0120 12:28:34.561488  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHKeyPath
	I0120 12:28:34.561636  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHKeyPath
	I0120 12:28:34.561740  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHUsername
	I0120 12:28:34.561902  992109 main.go:141] libmachine: Using SSH client type: native
	I0120 12:28:34.562164  992109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0120 12:28:34.562184  992109 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 12:28:34.658234  992109 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0120 12:28:34.658267  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetMachineName
	I0120 12:28:34.658486  992109 buildroot.go:166] provisioning hostname "no-preload-496524"
	I0120 12:28:34.658508  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetMachineName
	I0120 12:28:34.658707  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHHostname
	I0120 12:28:34.661396  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:34.661807  992109 main.go:141] libmachine: (no-preload-496524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:8f:cb", ip: ""} in network mk-no-preload-496524: {Iface:virbr3 ExpiryTime:2025-01-20 13:28:26 +0000 UTC Type:0 Mac:52:54:00:13:8f:cb Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-496524 Clientid:01:52:54:00:13:8f:cb}
	I0120 12:28:34.661837  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined IP address 192.168.61.107 and MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:34.662042  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHPort
	I0120 12:28:34.662253  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHKeyPath
	I0120 12:28:34.662423  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHKeyPath
	I0120 12:28:34.662626  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHUsername
	I0120 12:28:34.662775  992109 main.go:141] libmachine: Using SSH client type: native
	I0120 12:28:34.662931  992109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0120 12:28:34.662945  992109 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-496524 && echo "no-preload-496524" | sudo tee /etc/hostname
	I0120 12:28:34.770511  992109 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-496524
	
	I0120 12:28:34.770558  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHHostname
	I0120 12:28:34.773040  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:34.773359  992109 main.go:141] libmachine: (no-preload-496524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:8f:cb", ip: ""} in network mk-no-preload-496524: {Iface:virbr3 ExpiryTime:2025-01-20 13:28:26 +0000 UTC Type:0 Mac:52:54:00:13:8f:cb Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-496524 Clientid:01:52:54:00:13:8f:cb}
	I0120 12:28:34.773388  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined IP address 192.168.61.107 and MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:34.773534  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHPort
	I0120 12:28:34.773712  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHKeyPath
	I0120 12:28:34.773886  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHKeyPath
	I0120 12:28:34.774053  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHUsername
	I0120 12:28:34.774230  992109 main.go:141] libmachine: Using SSH client type: native
	I0120 12:28:34.774416  992109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0120 12:28:34.774438  992109 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-496524' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-496524/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-496524' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 12:28:34.878092  992109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 12:28:34.878146  992109 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20151-942401/.minikube CaCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20151-942401/.minikube}
	I0120 12:28:34.878198  992109 buildroot.go:174] setting up certificates
	I0120 12:28:34.878217  992109 provision.go:84] configureAuth start
	I0120 12:28:34.878235  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetMachineName
	I0120 12:28:34.878567  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetIP
	I0120 12:28:34.880844  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:34.881168  992109 main.go:141] libmachine: (no-preload-496524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:8f:cb", ip: ""} in network mk-no-preload-496524: {Iface:virbr3 ExpiryTime:2025-01-20 13:28:26 +0000 UTC Type:0 Mac:52:54:00:13:8f:cb Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-496524 Clientid:01:52:54:00:13:8f:cb}
	I0120 12:28:34.881195  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined IP address 192.168.61.107 and MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:34.881321  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHHostname
	I0120 12:28:34.883380  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:34.883655  992109 main.go:141] libmachine: (no-preload-496524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:8f:cb", ip: ""} in network mk-no-preload-496524: {Iface:virbr3 ExpiryTime:2025-01-20 13:28:26 +0000 UTC Type:0 Mac:52:54:00:13:8f:cb Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-496524 Clientid:01:52:54:00:13:8f:cb}
	I0120 12:28:34.883689  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined IP address 192.168.61.107 and MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:34.883828  992109 provision.go:143] copyHostCerts
	I0120 12:28:34.883887  992109 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem, removing ...
	I0120 12:28:34.883911  992109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem
	I0120 12:28:34.883992  992109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem (1675 bytes)
	I0120 12:28:34.884099  992109 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem, removing ...
	I0120 12:28:34.884111  992109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem
	I0120 12:28:34.884146  992109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem (1078 bytes)
	I0120 12:28:34.884221  992109 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem, removing ...
	I0120 12:28:34.884231  992109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem
	I0120 12:28:34.884261  992109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem (1123 bytes)
	I0120 12:28:34.884326  992109 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem org=jenkins.no-preload-496524 san=[127.0.0.1 192.168.61.107 localhost minikube no-preload-496524]
	I0120 12:28:35.130484  992109 provision.go:177] copyRemoteCerts
	I0120 12:28:35.130563  992109 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 12:28:35.130594  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHHostname
	I0120 12:28:35.133147  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:35.133548  992109 main.go:141] libmachine: (no-preload-496524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:8f:cb", ip: ""} in network mk-no-preload-496524: {Iface:virbr3 ExpiryTime:2025-01-20 13:28:26 +0000 UTC Type:0 Mac:52:54:00:13:8f:cb Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-496524 Clientid:01:52:54:00:13:8f:cb}
	I0120 12:28:35.133582  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined IP address 192.168.61.107 and MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:35.133763  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHPort
	I0120 12:28:35.133947  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHKeyPath
	I0120 12:28:35.134133  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHUsername
	I0120 12:28:35.134284  992109 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/no-preload-496524/id_rsa Username:docker}
	I0120 12:28:35.211377  992109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0120 12:28:35.233258  992109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0120 12:28:35.256866  992109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0120 12:28:35.281835  992109 provision.go:87] duration metric: took 403.599736ms to configureAuth
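configureAuth, as logged, issues a fresh server certificate whose SANs cover 127.0.0.1, the machine IP, localhost, minikube and the node name, then copies ca.pem, server.pem and server-key.pem into /etc/docker. A sketch of SAN-bearing certificate generation with crypto/x509, assuming the CA certificate and key from ca.pem/ca-key.pem are already loaded; this is illustrative only, not minikube's provision package:

	package provisionsketch

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"time"
	)

	// newServerCert signs a server certificate for the SANs listed in the log
	// above with an already-loaded CA; loading ca.pem/ca-key.pem is omitted.
	func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) (certPEM, keyPEM []byte, err error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-496524"}},
			DNSNames:     []string{"localhost", "minikube", "no-preload-496524"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.107")},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0), // validity is an assumption for illustration
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
		keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
		return certPEM, keyPEM, nil
	}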
	I0120 12:28:35.281861  992109 buildroot.go:189] setting minikube options for container-runtime
	I0120 12:28:35.282029  992109 config.go:182] Loaded profile config "no-preload-496524": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:28:35.282116  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHHostname
	I0120 12:28:35.284729  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:35.285101  992109 main.go:141] libmachine: (no-preload-496524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:8f:cb", ip: ""} in network mk-no-preload-496524: {Iface:virbr3 ExpiryTime:2025-01-20 13:28:26 +0000 UTC Type:0 Mac:52:54:00:13:8f:cb Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-496524 Clientid:01:52:54:00:13:8f:cb}
	I0120 12:28:35.285144  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined IP address 192.168.61.107 and MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:35.285285  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHPort
	I0120 12:28:35.285505  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHKeyPath
	I0120 12:28:35.285683  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHKeyPath
	I0120 12:28:35.285805  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHUsername
	I0120 12:28:35.286024  992109 main.go:141] libmachine: Using SSH client type: native
	I0120 12:28:35.286199  992109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0120 12:28:35.286214  992109 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 12:28:35.493644  992109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 12:28:35.493680  992109 machine.go:96] duration metric: took 935.637762ms to provisionDockerMachine
	I0120 12:28:35.493696  992109 start.go:293] postStartSetup for "no-preload-496524" (driver="kvm2")
	I0120 12:28:35.493711  992109 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 12:28:35.493741  992109 main.go:141] libmachine: (no-preload-496524) Calling .DriverName
	I0120 12:28:35.494122  992109 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 12:28:35.494163  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHHostname
	I0120 12:28:35.496895  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:35.497279  992109 main.go:141] libmachine: (no-preload-496524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:8f:cb", ip: ""} in network mk-no-preload-496524: {Iface:virbr3 ExpiryTime:2025-01-20 13:28:26 +0000 UTC Type:0 Mac:52:54:00:13:8f:cb Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-496524 Clientid:01:52:54:00:13:8f:cb}
	I0120 12:28:35.497317  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined IP address 192.168.61.107 and MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:35.497477  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHPort
	I0120 12:28:35.497631  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHKeyPath
	I0120 12:28:35.497730  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHUsername
	I0120 12:28:35.497855  992109 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/no-preload-496524/id_rsa Username:docker}
	I0120 12:28:35.575377  992109 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 12:28:35.579116  992109 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 12:28:35.579142  992109 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-942401/.minikube/addons for local assets ...
	I0120 12:28:35.579239  992109 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-942401/.minikube/files for local assets ...
	I0120 12:28:35.579334  992109 filesync.go:149] local asset: /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem -> 9496562.pem in /etc/ssl/certs
	I0120 12:28:35.579458  992109 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 12:28:35.587698  992109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem --> /etc/ssl/certs/9496562.pem (1708 bytes)
	I0120 12:28:35.610005  992109 start.go:296] duration metric: took 116.29583ms for postStartSetup
	I0120 12:28:35.610041  992109 fix.go:56] duration metric: took 19.636161334s for fixHost
	I0120 12:28:35.610060  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHHostname
	I0120 12:28:35.612755  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:35.613094  992109 main.go:141] libmachine: (no-preload-496524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:8f:cb", ip: ""} in network mk-no-preload-496524: {Iface:virbr3 ExpiryTime:2025-01-20 13:28:26 +0000 UTC Type:0 Mac:52:54:00:13:8f:cb Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-496524 Clientid:01:52:54:00:13:8f:cb}
	I0120 12:28:35.613130  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined IP address 192.168.61.107 and MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:35.613251  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHPort
	I0120 12:28:35.613450  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHKeyPath
	I0120 12:28:35.613615  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHKeyPath
	I0120 12:28:35.613740  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHUsername
	I0120 12:28:35.613889  992109 main.go:141] libmachine: Using SSH client type: native
	I0120 12:28:35.614042  992109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0120 12:28:35.614052  992109 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 12:28:35.710366  992109 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737376115.678331750
	
	I0120 12:28:35.710394  992109 fix.go:216] guest clock: 1737376115.678331750
	I0120 12:28:35.710404  992109 fix.go:229] Guest: 2025-01-20 12:28:35.67833175 +0000 UTC Remote: 2025-01-20 12:28:35.610045187 +0000 UTC m=+19.780407018 (delta=68.286563ms)
	I0120 12:28:35.710447  992109 fix.go:200] guest clock delta is within tolerance: 68.286563ms
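The clock check compares the guest's `date +%s.%N` output against the host-side timestamp and accepts the skew if it stays small. A self-contained Go version using the exact values from the log lines above; the 2-second threshold is an assumption for illustration:

	package main

	import (
		"fmt"
		"time"
	)

	// withinTolerance reports whether the guest/host clock skew is acceptable.
	func withinTolerance(guest, host time.Time, max time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= max
	}

	func main() {
		// Values taken from the log lines above.
		guest := time.Unix(1737376115, 678331750).UTC()
		host := time.Date(2025, time.January, 20, 12, 28, 35, 610045187, time.UTC)
		delta, ok := withinTolerance(guest, host, 2*time.Second)
		fmt.Printf("delta=%v withinTolerance=%v\n", delta, ok) // prints delta=68.286563ms withinTolerance=true
	}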
	I0120 12:28:35.710456  992109 start.go:83] releasing machines lock for "no-preload-496524", held for 19.736595491s
	I0120 12:28:35.710483  992109 main.go:141] libmachine: (no-preload-496524) Calling .DriverName
	I0120 12:28:35.710737  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetIP
	I0120 12:28:35.713089  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:35.713516  992109 main.go:141] libmachine: (no-preload-496524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:8f:cb", ip: ""} in network mk-no-preload-496524: {Iface:virbr3 ExpiryTime:2025-01-20 13:28:26 +0000 UTC Type:0 Mac:52:54:00:13:8f:cb Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-496524 Clientid:01:52:54:00:13:8f:cb}
	I0120 12:28:35.713546  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined IP address 192.168.61.107 and MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:35.713720  992109 main.go:141] libmachine: (no-preload-496524) Calling .DriverName
	I0120 12:28:35.714256  992109 main.go:141] libmachine: (no-preload-496524) Calling .DriverName
	I0120 12:28:35.714475  992109 main.go:141] libmachine: (no-preload-496524) Calling .DriverName
	I0120 12:28:35.714552  992109 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 12:28:35.714604  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHHostname
	I0120 12:28:35.714698  992109 ssh_runner.go:195] Run: cat /version.json
	I0120 12:28:35.714718  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHHostname
	I0120 12:28:35.717226  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:35.717566  992109 main.go:141] libmachine: (no-preload-496524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:8f:cb", ip: ""} in network mk-no-preload-496524: {Iface:virbr3 ExpiryTime:2025-01-20 13:28:26 +0000 UTC Type:0 Mac:52:54:00:13:8f:cb Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-496524 Clientid:01:52:54:00:13:8f:cb}
	I0120 12:28:35.717598  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined IP address 192.168.61.107 and MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:35.717618  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:35.717727  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHPort
	I0120 12:28:35.717926  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHKeyPath
	I0120 12:28:35.718024  992109 main.go:141] libmachine: (no-preload-496524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:8f:cb", ip: ""} in network mk-no-preload-496524: {Iface:virbr3 ExpiryTime:2025-01-20 13:28:26 +0000 UTC Type:0 Mac:52:54:00:13:8f:cb Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-496524 Clientid:01:52:54:00:13:8f:cb}
	I0120 12:28:35.718066  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined IP address 192.168.61.107 and MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:35.718082  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHUsername
	I0120 12:28:35.718179  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHPort
	I0120 12:28:35.718255  992109 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/no-preload-496524/id_rsa Username:docker}
	I0120 12:28:35.718321  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHKeyPath
	I0120 12:28:35.718422  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHUsername
	I0120 12:28:35.718571  992109 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/no-preload-496524/id_rsa Username:docker}
	I0120 12:28:35.791991  992109 ssh_runner.go:195] Run: systemctl --version
	I0120 12:28:35.819568  992109 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 12:28:35.967934  992109 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 12:28:35.975494  992109 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 12:28:35.975576  992109 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 12:28:35.992579  992109 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
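Disabling the bridge/podman CNI configs is done by renaming them with a .mk_disabled suffix so the runtime no longer picks them up. A Go sketch of the same rename pass over /etc/cni/net.d, written as a hypothetical local helper rather than the cni package shown in the log:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableBridgeCNIConfigs renames bridge/podman CNI configs, mirroring the
	// logged `find ... -exec mv {} {}.mk_disabled` step.
	func disableBridgeCNIConfigs(dir string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var disabled []string
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
				continue
			}
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
		return disabled, nil
	}

	func main() {
		fmt.Println(disableBridgeCNIConfigs("/etc/cni/net.d"))
	}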
	I0120 12:28:35.992608  992109 start.go:495] detecting cgroup driver to use...
	I0120 12:28:35.992682  992109 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 12:28:36.011639  992109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 12:28:36.024785  992109 docker.go:217] disabling cri-docker service (if available) ...
	I0120 12:28:36.024836  992109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 12:28:36.036680  992109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 12:28:36.048725  992109 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 12:28:36.150882  992109 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 12:28:36.274182  992109 docker.go:233] disabling docker service ...
	I0120 12:28:36.274256  992109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 12:28:36.287040  992109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 12:28:36.299305  992109 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 12:28:36.422131  992109 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 12:28:36.535314  992109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 12:28:36.547511  992109 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 12:28:36.563966  992109 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0120 12:28:36.564078  992109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:28:36.574366  992109 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 12:28:36.574433  992109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:28:36.584640  992109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:28:36.594205  992109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:28:36.603882  992109 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 12:28:36.613534  992109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:28:36.622745  992109 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:28:36.638566  992109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:28:36.647989  992109 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 12:28:36.656382  992109 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 12:28:36.656427  992109 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 12:28:36.668949  992109 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 12:28:36.677473  992109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:28:36.788322  992109 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 12:28:36.877190  992109 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 12:28:36.877280  992109 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 12:28:36.881610  992109 start.go:563] Will wait 60s for crictl version
	I0120 12:28:36.881660  992109 ssh_runner.go:195] Run: which crictl
	I0120 12:28:36.885234  992109 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 12:28:36.926482  992109 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 12:28:36.926590  992109 ssh_runner.go:195] Run: crio --version
	I0120 12:28:36.956782  992109 ssh_runner.go:195] Run: crio --version
	I0120 12:28:36.994444  992109 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0120 12:28:36.995935  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetIP
	I0120 12:28:36.998868  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:36.999321  992109 main.go:141] libmachine: (no-preload-496524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:8f:cb", ip: ""} in network mk-no-preload-496524: {Iface:virbr3 ExpiryTime:2025-01-20 13:28:26 +0000 UTC Type:0 Mac:52:54:00:13:8f:cb Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-496524 Clientid:01:52:54:00:13:8f:cb}
	I0120 12:28:36.999359  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined IP address 192.168.61.107 and MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:28:36.999623  992109 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0120 12:28:37.003662  992109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 12:28:37.016535  992109 kubeadm.go:883] updating cluster {Name:no-preload-496524 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-496524 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 12:28:37.016644  992109 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 12:28:37.016681  992109 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:28:37.053956  992109 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0120 12:28:37.053985  992109 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.32.0 registry.k8s.io/kube-controller-manager:v1.32.0 registry.k8s.io/kube-scheduler:v1.32.0 registry.k8s.io/kube-proxy:v1.32.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.16-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0120 12:28:37.054036  992109 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:28:37.054059  992109 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.32.0
	I0120 12:28:37.054091  992109 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0120 12:28:37.054101  992109 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0120 12:28:37.054144  992109 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.32.0
	I0120 12:28:37.054169  992109 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.32.0
	I0120 12:28:37.054145  992109 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.32.0
	I0120 12:28:37.054070  992109 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.16-0
	I0120 12:28:37.055521  992109 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0120 12:28:37.055529  992109 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.32.0
	I0120 12:28:37.055522  992109 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.32.0
	I0120 12:28:37.055591  992109 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0120 12:28:37.055623  992109 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:28:37.055869  992109 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.32.0
	I0120 12:28:37.055898  992109 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.32.0
	I0120 12:28:37.056067  992109 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.16-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.16-0
	I0120 12:28:37.276760  992109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.32.0
	I0120 12:28:37.298245  992109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.32.0
	I0120 12:28:37.301126  992109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.32.0
	I0120 12:28:37.305656  992109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0120 12:28:37.307491  992109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.16-0
	I0120 12:28:37.316702  992109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.32.0
	I0120 12:28:37.327867  992109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0120 12:28:37.390942  992109 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.32.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.32.0" does not exist at hash "c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4" in container runtime
	I0120 12:28:37.391026  992109 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.32.0
	I0120 12:28:37.391100  992109 ssh_runner.go:195] Run: which crictl
	I0120 12:28:37.474976  992109 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.32.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.32.0" does not exist at hash "8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3" in container runtime
	I0120 12:28:37.475035  992109 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.32.0
	I0120 12:28:37.475096  992109 ssh_runner.go:195] Run: which crictl
	I0120 12:28:37.489128  992109 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.32.0" needs transfer: "registry.k8s.io/kube-proxy:v1.32.0" does not exist at hash "040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08" in container runtime
	I0120 12:28:37.489183  992109 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.32.0
	I0120 12:28:37.489236  992109 ssh_runner.go:195] Run: which crictl
	I0120 12:28:37.494150  992109 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0120 12:28:37.494191  992109 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0120 12:28:37.494240  992109 ssh_runner.go:195] Run: which crictl
	I0120 12:28:37.494261  992109 cache_images.go:116] "registry.k8s.io/etcd:3.5.16-0" needs transfer: "registry.k8s.io/etcd:3.5.16-0" does not exist at hash "a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc" in container runtime
	I0120 12:28:37.494280  992109 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.32.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.32.0" does not exist at hash "a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5" in container runtime
	I0120 12:28:37.494293  992109 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.16-0
	I0120 12:28:37.494302  992109 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.32.0
	I0120 12:28:37.494334  992109 ssh_runner.go:195] Run: which crictl
	I0120 12:28:37.494337  992109 ssh_runner.go:195] Run: which crictl
	I0120 12:28:37.600245  992109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.0
	I0120 12:28:37.600264  992109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.0
	I0120 12:28:37.600281  992109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.0
	I0120 12:28:37.600299  992109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0120 12:28:37.600354  992109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I0120 12:28:37.600359  992109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.0
	I0120 12:28:37.729738  992109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.0
	I0120 12:28:37.731230  992109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.0
	I0120 12:28:37.731478  992109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.0
	I0120 12:28:37.736383  992109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.0
	I0120 12:28:37.736470  992109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I0120 12:28:37.736619  992109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0120 12:28:37.825552  992109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.0
	I0120 12:28:37.844304  992109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.0
	I0120 12:28:37.850187  992109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.0
	I0120 12:28:37.868677  992109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.0
	I0120 12:28:37.868735  992109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I0120 12:28:37.868772  992109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0120 12:28:37.992544  992109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.0
	I0120 12:28:37.992657  992109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.32.0
	I0120 12:28:37.992712  992109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.0
	I0120 12:28:37.992806  992109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.32.0
	I0120 12:28:37.998432  992109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.0
	I0120 12:28:37.998505  992109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.32.0
	I0120 12:28:38.008620  992109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.0
	I0120 12:28:38.008669  992109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0
	I0120 12:28:38.008708  992109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.32.0
	I0120 12:28:38.008739  992109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.16-0
	I0120 12:28:38.008743  992109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0120 12:28:38.008866  992109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0120 12:28:38.010076  992109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.32.0 (exists)
	I0120 12:28:38.010094  992109 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.32.0
	I0120 12:28:38.010143  992109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.32.0
	I0120 12:28:38.010344  992109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.32.0 (exists)
	I0120 12:28:38.011840  992109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.32.0 (exists)
	I0120 12:28:38.015064  992109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.32.0 (exists)
	I0120 12:28:38.015121  992109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.16-0 (exists)
	I0120 12:28:38.018387  992109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0120 12:28:38.276639  992109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:28:40.509048  992109 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.232362534s)
	I0120 12:28:40.509100  992109 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0120 12:28:40.509115  992109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.32.0: (2.498946126s)
	I0120 12:28:40.509141  992109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.0 from cache
	I0120 12:28:40.509139  992109 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:28:40.509170  992109 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.32.0
	I0120 12:28:40.509209  992109 ssh_runner.go:195] Run: which crictl
	I0120 12:28:40.509216  992109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.32.0
	I0120 12:28:42.671379  992109 ssh_runner.go:235] Completed: which crictl: (2.162139116s)
	I0120 12:28:42.671449  992109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.32.0: (2.162212769s)
	I0120 12:28:42.671472  992109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.0 from cache
	I0120 12:28:42.671457  992109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:28:42.671490  992109 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.32.0
	I0120 12:28:42.671531  992109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.32.0
	I0120 12:28:44.328018  992109 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.656503047s)
	I0120 12:28:44.328124  992109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:28:44.328133  992109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.32.0: (1.65657451s)
	I0120 12:28:44.328158  992109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.0 from cache
	I0120 12:28:44.328193  992109 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.32.0
	I0120 12:28:44.328244  992109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.32.0
	I0120 12:28:45.881706  992109 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.553548334s)
	I0120 12:28:45.881829  992109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:28:45.881858  992109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.32.0: (1.553583795s)
	I0120 12:28:45.881885  992109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.0 from cache
	I0120 12:28:45.881913  992109 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.16-0
	I0120 12:28:45.881955  992109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.16-0
	I0120 12:28:45.917160  992109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0120 12:28:45.917275  992109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0120 12:28:49.381761  992109 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.46445725s)
	I0120 12:28:49.381812  992109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0120 12:28:49.381939  992109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.16-0: (3.499957227s)
	I0120 12:28:49.381966  992109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 from cache
	I0120 12:28:49.381994  992109 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0120 12:28:49.382044  992109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0120 12:28:51.241933  992109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.85985262s)
	I0120 12:28:51.241974  992109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0120 12:28:51.242005  992109 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0120 12:28:51.242069  992109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0120 12:28:52.089282  992109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0120 12:28:52.089353  992109 cache_images.go:123] Successfully loaded all cached images
	I0120 12:28:52.089364  992109 cache_images.go:92] duration metric: took 15.035364314s to LoadCachedImages
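The image-cache phase above follows one pattern per image: inspect the runtime for the expected image ID, remove the stale tag if the hash does not match, then `podman load` the cached tarball from /var/lib/minikube/images. A compact Go sketch of that per-image step, with a local exec runner standing in for minikube's SSH runner:

	package main

	import (
		"fmt"
		"os/exec"
		"path"
		"strings"
	)

	// run executes a command locally; it stands in for minikube's SSH runner.
	func run(args ...string) (string, error) {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		return strings.TrimSpace(string(out)), err
	}

	// loadCachedImage mirrors the logged sequence for one image: skip it if the
	// runtime already has the expected ID, otherwise remove the stale tag and
	// load the cached tarball with podman.
	func loadCachedImage(image, wantID, tarball string) error {
		id, _ := run("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image)
		if id == wantID {
			return nil // already present at the right hash
		}
		_, _ = run("sudo", "crictl", "rmi", image) // ignore "not found" errors
		if _, err := run("sudo", "podman", "load", "-i", tarball); err != nil {
			return fmt.Errorf("podman load %s: %w", path.Base(tarball), err)
		}
		return nil
	}

	func main() {
		// Example values from the log above; the hash is the expected image ID.
		err := loadCachedImage("registry.k8s.io/kube-apiserver:v1.32.0",
			"c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4",
			"/var/lib/minikube/images/kube-apiserver_v1.32.0")
		fmt.Println(err)
	}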
	I0120 12:28:52.089381  992109 kubeadm.go:934] updating node { 192.168.61.107 8443 v1.32.0 crio true true} ...
	I0120 12:28:52.089556  992109 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-496524 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:no-preload-496524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 12:28:52.089650  992109 ssh_runner.go:195] Run: crio config
	I0120 12:28:52.136937  992109 cni.go:84] Creating CNI manager for ""
	I0120 12:28:52.136959  992109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:28:52.136968  992109 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 12:28:52.136992  992109 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.107 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-496524 NodeName:no-preload-496524 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 12:28:52.137133  992109 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-496524"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.107"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.107"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 12:28:52.137206  992109 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 12:28:52.146095  992109 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 12:28:52.146166  992109 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 12:28:52.154431  992109 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0120 12:28:52.168993  992109 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 12:28:52.183526  992109 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
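The kubeadm.yaml just written to /var/tmp/minikube carries InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration as separate YAML documents in one file (dumped in full above). A small sketch that splits the documents and reads back a few KubeletConfiguration fields with gopkg.in/yaml.v3; the struct covers only the fields shown above, not the full upstream type:

	package main

	import (
		"fmt"
		"os"
		"strings"

		"gopkg.in/yaml.v3"
	)

	// kubeletConfig picks out a few of the fields written in the log above.
	type kubeletConfig struct {
		Kind          string            `yaml:"kind"`
		CgroupDriver  string            `yaml:"cgroupDriver"`
		FailSwapOn    bool              `yaml:"failSwapOn"`
		EvictionHard  map[string]string `yaml:"evictionHard"`
		StaticPodPath string            `yaml:"staticPodPath"`
	}

	func main() {
		raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			panic(err)
		}
		for _, doc := range strings.Split(string(raw), "\n---\n") {
			var kc kubeletConfig
			if yaml.Unmarshal([]byte(doc), &kc) != nil || kc.Kind != "KubeletConfiguration" {
				continue // skip the non-kubelet documents
			}
			fmt.Printf("cgroupDriver=%s failSwapOn=%v evictionHard=%v\n",
				kc.CgroupDriver, kc.FailSwapOn, kc.EvictionHard)
		}
	}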
	I0120 12:28:52.197990  992109 ssh_runner.go:195] Run: grep 192.168.61.107	control-plane.minikube.internal$ /etc/hosts
	I0120 12:28:52.201275  992109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 12:28:52.211803  992109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:28:52.335731  992109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:28:52.350249  992109 certs.go:68] Setting up /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/no-preload-496524 for IP: 192.168.61.107
	I0120 12:28:52.350274  992109 certs.go:194] generating shared ca certs ...
	I0120 12:28:52.350306  992109 certs.go:226] acquiring lock for ca certs: {Name:mk7d6f61e0c358ebc451db1b73619131ab80bd63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:28:52.350513  992109 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20151-942401/.minikube/ca.key
	I0120 12:28:52.350606  992109 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.key
	I0120 12:28:52.350621  992109 certs.go:256] generating profile certs ...
	I0120 12:28:52.350716  992109 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/no-preload-496524/client.key
	I0120 12:28:52.350796  992109 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/no-preload-496524/apiserver.key.e326b7de
	I0120 12:28:52.350853  992109 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/no-preload-496524/proxy-client.key
	I0120 12:28:52.350994  992109 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656.pem (1338 bytes)
	W0120 12:28:52.351037  992109 certs.go:480] ignoring /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656_empty.pem, impossibly tiny 0 bytes
	I0120 12:28:52.351051  992109 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem (1679 bytes)
	I0120 12:28:52.351083  992109 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem (1078 bytes)
	I0120 12:28:52.351116  992109 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem (1123 bytes)
	I0120 12:28:52.351148  992109 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem (1675 bytes)
	I0120 12:28:52.351207  992109 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem (1708 bytes)
	I0120 12:28:52.351820  992109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 12:28:52.400588  992109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0120 12:28:52.434029  992109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 12:28:52.467559  992109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 12:28:52.503727  992109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/no-preload-496524/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0120 12:28:52.529328  992109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/no-preload-496524/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0120 12:28:52.550581  992109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/no-preload-496524/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 12:28:52.571379  992109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/no-preload-496524/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 12:28:52.592866  992109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem --> /usr/share/ca-certificates/9496562.pem (1708 bytes)
	I0120 12:28:52.613904  992109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 12:28:52.634325  992109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656.pem --> /usr/share/ca-certificates/949656.pem (1338 bytes)
	I0120 12:28:52.655732  992109 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 12:28:52.670954  992109 ssh_runner.go:195] Run: openssl version
	I0120 12:28:52.675989  992109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9496562.pem && ln -fs /usr/share/ca-certificates/9496562.pem /etc/ssl/certs/9496562.pem"
	I0120 12:28:52.685550  992109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9496562.pem
	I0120 12:28:52.689916  992109 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 11:30 /usr/share/ca-certificates/9496562.pem
	I0120 12:28:52.689965  992109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9496562.pem
	I0120 12:28:52.695426  992109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9496562.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 12:28:52.705554  992109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 12:28:52.715598  992109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:28:52.719626  992109 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 11:22 /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:28:52.719662  992109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:28:52.724689  992109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 12:28:52.734582  992109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/949656.pem && ln -fs /usr/share/ca-certificates/949656.pem /etc/ssl/certs/949656.pem"
	I0120 12:28:52.746917  992109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/949656.pem
	I0120 12:28:52.751019  992109 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 11:30 /usr/share/ca-certificates/949656.pem
	I0120 12:28:52.751065  992109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/949656.pem
	I0120 12:28:52.756231  992109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/949656.pem /etc/ssl/certs/51391683.0"
	I0120 12:28:52.766146  992109 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 12:28:52.770337  992109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 12:28:52.775822  992109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 12:28:52.780927  992109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 12:28:52.785873  992109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 12:28:52.791030  992109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 12:28:52.796103  992109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
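The openssl x509 -noout -checkend 86400 runs above ask, for each control-plane certificate, whether it expires within the next 24 hours (86400 seconds); a non-zero exit means the certificate is about to lapse. A minimal Go sketch of the same check using only the standard library; the path is an example, not a claim about minikube's internals:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certExpiresWithin reports whether the PEM certificate at path expires
// within the given window (the equivalent of openssl's "-checkend <seconds>").
func certExpiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Example path taken from the log above.
	expiring, err := certExpiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}
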
	I0120 12:28:52.801209  992109 kubeadm.go:392] StartCluster: {Name:no-preload-496524 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-496524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:28:52.801316  992109 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 12:28:52.801363  992109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 12:28:52.835405  992109 cri.go:89] found id: ""
	I0120 12:28:52.835468  992109 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 12:28:52.843918  992109 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0120 12:28:52.843938  992109 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0120 12:28:52.843979  992109 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0120 12:28:52.852312  992109 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0120 12:28:52.853133  992109 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-496524" does not appear in /home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:28:52.853611  992109 kubeconfig.go:62] /home/jenkins/minikube-integration/20151-942401/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-496524" cluster setting kubeconfig missing "no-preload-496524" context setting]
	I0120 12:28:52.854895  992109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/kubeconfig: {Name:mk7b2d17f701fc845d991764ab9cc32b8b0646e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:28:52.857414  992109 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0120 12:28:52.865664  992109 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.107
	I0120 12:28:52.865696  992109 kubeadm.go:1160] stopping kube-system containers ...
	I0120 12:28:52.865712  992109 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0120 12:28:52.865756  992109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 12:28:52.897738  992109 cri.go:89] found id: ""
	I0120 12:28:52.897797  992109 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0120 12:28:52.914091  992109 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:28:52.922362  992109 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:28:52.922378  992109 kubeadm.go:157] found existing configuration files:
	
	I0120 12:28:52.922408  992109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 12:28:52.930317  992109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:28:52.930368  992109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:28:52.938481  992109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 12:28:52.946303  992109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:28:52.946360  992109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:28:52.954471  992109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 12:28:52.962209  992109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:28:52.962243  992109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:28:52.970173  992109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 12:28:52.977978  992109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:28:52.978027  992109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 12:28:52.986038  992109 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 12:28:52.994218  992109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:28:53.103288  992109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:28:54.168874  992109 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.065551897s)
	I0120 12:28:54.168917  992109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:28:54.360537  992109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:28:54.417950  992109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:28:54.479647  992109 api_server.go:52] waiting for apiserver process to appear ...
	I0120 12:28:54.479755  992109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:28:54.979849  992109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:28:55.480656  992109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:28:55.494185  992109 api_server.go:72] duration metric: took 1.014539381s to wait for apiserver process to appear ...
	I0120 12:28:55.494217  992109 api_server.go:88] waiting for apiserver healthz status ...
	I0120 12:28:55.494243  992109 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0120 12:28:57.725010  992109 api_server.go:279] https://192.168.61.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0120 12:28:57.725044  992109 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0120 12:28:57.725059  992109 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0120 12:28:57.769255  992109 api_server.go:279] https://192.168.61.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0120 12:28:57.769282  992109 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0120 12:28:57.994708  992109 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0120 12:28:57.999757  992109 api_server.go:279] https://192.168.61.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 12:28:57.999789  992109 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 12:28:58.494425  992109 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0120 12:28:58.500154  992109 api_server.go:279] https://192.168.61.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 12:28:58.500179  992109 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 12:28:58.994606  992109 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0120 12:28:59.001984  992109 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
	I0120 12:28:59.014248  992109 api_server.go:141] control plane version: v1.32.0
	I0120 12:28:59.014279  992109 api_server.go:131] duration metric: took 3.520055116s to wait for apiserver health ...
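The healthz wait above polls https://192.168.61.107:8443/healthz roughly every half second: the 403 responses (the anonymous user cannot read /healthz until RBAC bootstrap completes) and the 500 responses (the bootstrap-roles and bootstrap-system-priority-classes post-start hooks still failing) are treated as not-ready, and the loop stops at the first 200 "ok". A self-contained Go sketch of such a poller with the standard library; the InsecureSkipVerify shortcut is an assumption for illustration only, not how minikube configures TLS:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200
// or the timeout elapses; 403 and 500 responses count as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustrative only: skip verification for the cluster's self-signed certs.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.107:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
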
	I0120 12:28:59.014289  992109 cni.go:84] Creating CNI manager for ""
	I0120 12:28:59.014295  992109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:28:59.015952  992109 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 12:28:59.017240  992109 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 12:28:59.040073  992109 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
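The two lines above create /etc/cni/net.d and copy a 496-byte conflist for the bridge CNI. The log does not show the file's contents, so the following is only an illustration of a generic bridge + host-local conflist (field names follow the upstream CNI bridge plugin documentation, not minikube's actual 1-k8s.conflist):

package main

import "os"

// A generic bridge CNI conflist, shown purely as an illustration of the format;
// it is NOT the exact file minikube generates.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "k8s-pod-network",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16",
        "routes": [{"dst": "0.0.0.0/0"}]
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	// Mirrors the "sudo mkdir -p /etc/cni/net.d" and scp steps in the log (requires root).
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}
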
	I0120 12:28:59.070737  992109 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 12:28:59.086573  992109 system_pods.go:59] 8 kube-system pods found
	I0120 12:28:59.086615  992109 system_pods.go:61] "coredns-668d6bf9bc-nrl8n" [8a924671-ef5f-4efb-be07-58824ff7e7f6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0120 12:28:59.086625  992109 system_pods.go:61] "etcd-no-preload-496524" [51f31b28-82e0-46d2-8f45-07078da530f3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0120 12:28:59.086638  992109 system_pods.go:61] "kube-apiserver-no-preload-496524" [37958fd0-c411-475d-a095-2733098d47fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0120 12:28:59.086649  992109 system_pods.go:61] "kube-controller-manager-no-preload-496524" [c0046a1c-0a48-497b-a4f4-c53bc93d4cab] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0120 12:28:59.086658  992109 system_pods.go:61] "kube-proxy-h7lgg" [d97db720-de91-45f1-a949-a81addecd5b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0120 12:28:59.086664  992109 system_pods.go:61] "kube-scheduler-no-preload-496524" [670dc471-ba5e-4c30-ad95-96fca84b5297] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0120 12:28:59.086674  992109 system_pods.go:61] "metrics-server-f79f97bbb-4zkcz" [8fb7eb09-91cf-40c9-b8e9-bd5ec6a93f92] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 12:28:59.086681  992109 system_pods.go:61] "storage-provisioner" [a882a790-0ba0-4cef-87cf-5ee521ea4c45] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0120 12:28:59.086688  992109 system_pods.go:74] duration metric: took 15.929506ms to wait for pod list to return data ...
	I0120 12:28:59.086700  992109 node_conditions.go:102] verifying NodePressure condition ...
	I0120 12:28:59.093810  992109 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 12:28:59.093839  992109 node_conditions.go:123] node cpu capacity is 2
	I0120 12:28:59.093854  992109 node_conditions.go:105] duration metric: took 7.145398ms to run NodePressure ...
	I0120 12:28:59.093872  992109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:28:59.410913  992109 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0120 12:28:59.416468  992109 kubeadm.go:739] kubelet initialised
	I0120 12:28:59.416489  992109 kubeadm.go:740] duration metric: took 5.550351ms waiting for restarted kubelet to initialise ...
	I0120 12:28:59.416499  992109 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:28:59.428797  992109 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-nrl8n" in "kube-system" namespace to be "Ready" ...
	I0120 12:29:01.434728  992109 pod_ready.go:103] pod "coredns-668d6bf9bc-nrl8n" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:03.435013  992109 pod_ready.go:103] pod "coredns-668d6bf9bc-nrl8n" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:05.435126  992109 pod_ready.go:103] pod "coredns-668d6bf9bc-nrl8n" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:07.934846  992109 pod_ready.go:103] pod "coredns-668d6bf9bc-nrl8n" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:09.935383  992109 pod_ready.go:103] pod "coredns-668d6bf9bc-nrl8n" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:11.934899  992109 pod_ready.go:93] pod "coredns-668d6bf9bc-nrl8n" in "kube-system" namespace has status "Ready":"True"
	I0120 12:29:11.934931  992109 pod_ready.go:82] duration metric: took 12.506110729s for pod "coredns-668d6bf9bc-nrl8n" in "kube-system" namespace to be "Ready" ...
	I0120 12:29:11.934945  992109 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:29:11.938771  992109 pod_ready.go:93] pod "etcd-no-preload-496524" in "kube-system" namespace has status "Ready":"True"
	I0120 12:29:11.938796  992109 pod_ready.go:82] duration metric: took 3.842592ms for pod "etcd-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:29:11.938809  992109 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:29:11.942630  992109 pod_ready.go:93] pod "kube-apiserver-no-preload-496524" in "kube-system" namespace has status "Ready":"True"
	I0120 12:29:11.942652  992109 pod_ready.go:82] duration metric: took 3.834221ms for pod "kube-apiserver-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:29:11.942662  992109 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:29:11.946666  992109 pod_ready.go:93] pod "kube-controller-manager-no-preload-496524" in "kube-system" namespace has status "Ready":"True"
	I0120 12:29:11.946690  992109 pod_ready.go:82] duration metric: took 4.021015ms for pod "kube-controller-manager-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:29:11.946702  992109 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-h7lgg" in "kube-system" namespace to be "Ready" ...
	I0120 12:29:11.951184  992109 pod_ready.go:93] pod "kube-proxy-h7lgg" in "kube-system" namespace has status "Ready":"True"
	I0120 12:29:11.951203  992109 pod_ready.go:82] duration metric: took 4.493781ms for pod "kube-proxy-h7lgg" in "kube-system" namespace to be "Ready" ...
	I0120 12:29:11.951210  992109 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:29:13.960985  992109 pod_ready.go:93] pod "kube-scheduler-no-preload-496524" in "kube-system" namespace has status "Ready":"True"
	I0120 12:29:13.961008  992109 pod_ready.go:82] duration metric: took 2.009792045s for pod "kube-scheduler-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:29:13.961018  992109 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace to be "Ready" ...
	I0120 12:29:15.966767  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:18.466274  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:20.467094  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:22.467666  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:24.967065  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:26.969799  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:29.468150  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:31.966955  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:34.466803  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:36.468899  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:38.469582  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:40.471026  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:42.967519  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:45.001789  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:47.469101  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:49.967679  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:51.967987  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:53.968089  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:56.467565  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:58.966084  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:00.967257  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:02.969180  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:05.466677  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:07.467072  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:09.469639  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:11.966770  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:13.968440  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:16.467498  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:18.467553  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:20.967828  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:23.467604  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:25.967037  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:28.467406  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:30.468462  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:32.968514  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:35.467354  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:37.730175  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:39.969619  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:42.467837  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:44.468536  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:46.966837  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:48.967277  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:50.967473  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:52.968090  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:55.467106  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:57.967373  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:00.468079  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:02.967470  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:05.466233  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:07.466651  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:09.467586  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:11.967640  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:13.968387  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:16.467025  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:18.966945  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:20.969393  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:23.466563  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:25.468388  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:27.967731  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:30.467076  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:32.467705  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:34.470006  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:36.967824  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:38.968146  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:41.468125  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:43.966550  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:45.967037  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:47.967799  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:50.468120  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:52.968580  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:55.466922  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:57.967658  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:59.968521  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:02.466874  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:04.467851  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:06.468061  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:08.966912  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:11.467184  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:13.966687  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:15.968298  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:18.466913  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:20.967285  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:22.967592  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:25.467420  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:27.467860  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:29.967353  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:31.967618  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:34.468025  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:36.967096  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:39.467542  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:41.966891  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:44.467792  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:46.967382  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:48.971509  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:51.468237  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:53.967177  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:56.467036  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:58.468431  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:00.469379  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:02.967537  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:05.467661  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:07.469260  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:09.967169  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:11.968039  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:13.962039  992109 pod_ready.go:82] duration metric: took 4m0.001004044s for pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace to be "Ready" ...
	E0120 12:33:13.962067  992109 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0120 12:33:13.962099  992109 pod_ready.go:39] duration metric: took 4m14.545589853s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
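The long run of pod_ready lines above is a fixed-interval wait: roughly every 2.5 seconds minikube re-reads the metrics-server pod and checks its Ready condition, and after 4m0s it gives up, which is what triggers the cluster reset below. A sketch of an equivalent wait written against client-go; the kubeconfig path is a placeholder and this is not minikube's own implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodReady polls a pod until its Ready condition is True or the timeout expires.
func waitForPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient read errors as "not ready yet"
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPodReady(cs, "kube-system", "metrics-server-f79f97bbb-4zkcz", 4*time.Minute); err != nil {
		fmt.Println("pod never became Ready:", err)
	}
}
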
	I0120 12:33:13.962140  992109 kubeadm.go:597] duration metric: took 4m21.118193658s to restartPrimaryControlPlane
	W0120 12:33:13.962239  992109 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0120 12:33:13.962281  992109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 12:33:41.582218  992109 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.61991226s)
	I0120 12:33:41.582297  992109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 12:33:41.597367  992109 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 12:33:41.606890  992109 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:33:41.615799  992109 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:33:41.615823  992109 kubeadm.go:157] found existing configuration files:
	
	I0120 12:33:41.615890  992109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 12:33:41.624548  992109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:33:41.624613  992109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:33:41.634296  992109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 12:33:41.645019  992109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:33:41.645069  992109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:33:41.653988  992109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 12:33:41.662620  992109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:33:41.662661  992109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:33:41.671164  992109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 12:33:41.679068  992109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:33:41.679121  992109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 12:33:41.687730  992109 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 12:33:41.842158  992109 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 12:33:49.627545  992109 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 12:33:49.627631  992109 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 12:33:49.627743  992109 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 12:33:49.627898  992109 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 12:33:49.628021  992109 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 12:33:49.628110  992109 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 12:33:49.629521  992109 out.go:235]   - Generating certificates and keys ...
	I0120 12:33:49.629586  992109 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 12:33:49.629652  992109 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 12:33:49.629732  992109 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 12:33:49.629811  992109 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 12:33:49.629945  992109 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 12:33:49.630101  992109 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 12:33:49.630179  992109 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 12:33:49.630255  992109 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 12:33:49.630331  992109 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 12:33:49.630426  992109 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 12:33:49.630491  992109 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 12:33:49.630586  992109 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 12:33:49.630669  992109 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 12:33:49.630752  992109 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 12:33:49.630819  992109 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 12:33:49.630898  992109 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 12:33:49.630946  992109 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 12:33:49.631065  992109 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 12:33:49.631148  992109 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 12:33:49.632352  992109 out.go:235]   - Booting up control plane ...
	I0120 12:33:49.632439  992109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 12:33:49.632500  992109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 12:33:49.632581  992109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 12:33:49.632734  992109 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 12:33:49.632818  992109 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 12:33:49.632854  992109 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 12:33:49.632972  992109 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 12:33:49.633093  992109 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 12:33:49.633183  992109 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.459324ms
	I0120 12:33:49.633288  992109 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 12:33:49.633376  992109 kubeadm.go:310] [api-check] The API server is healthy after 5.002077681s
	I0120 12:33:49.633495  992109 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 12:33:49.633603  992109 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 12:33:49.633652  992109 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 12:33:49.633813  992109 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-496524 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 12:33:49.633900  992109 kubeadm.go:310] [bootstrap-token] Using token: sww9nb.rwz5issf9tlw104y
	I0120 12:33:49.635315  992109 out.go:235]   - Configuring RBAC rules ...
	I0120 12:33:49.635441  992109 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 12:33:49.635546  992109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 12:33:49.635673  992109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 12:33:49.635790  992109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 12:33:49.635890  992109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 12:33:49.635965  992109 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 12:33:49.636063  992109 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 12:33:49.636105  992109 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 12:33:49.636151  992109 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 12:33:49.636157  992109 kubeadm.go:310] 
	I0120 12:33:49.636247  992109 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 12:33:49.636272  992109 kubeadm.go:310] 
	I0120 12:33:49.636388  992109 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 12:33:49.636400  992109 kubeadm.go:310] 
	I0120 12:33:49.636441  992109 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 12:33:49.636523  992109 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 12:33:49.636598  992109 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 12:33:49.636608  992109 kubeadm.go:310] 
	I0120 12:33:49.636714  992109 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 12:33:49.636738  992109 kubeadm.go:310] 
	I0120 12:33:49.636800  992109 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 12:33:49.636810  992109 kubeadm.go:310] 
	I0120 12:33:49.636874  992109 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 12:33:49.636984  992109 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 12:33:49.637071  992109 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 12:33:49.637082  992109 kubeadm.go:310] 
	I0120 12:33:49.637206  992109 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 12:33:49.637348  992109 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 12:33:49.637365  992109 kubeadm.go:310] 
	I0120 12:33:49.637484  992109 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token sww9nb.rwz5issf9tlw104y \
	I0120 12:33:49.637627  992109 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9daca4b474befed26d429fab1ed19f40e58ea9925316ea59a2e801171f5c9665 \
	I0120 12:33:49.637685  992109 kubeadm.go:310] 	--control-plane 
	I0120 12:33:49.637704  992109 kubeadm.go:310] 
	I0120 12:33:49.637810  992109 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 12:33:49.637826  992109 kubeadm.go:310] 
	I0120 12:33:49.637934  992109 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sww9nb.rwz5issf9tlw104y \
	I0120 12:33:49.638086  992109 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9daca4b474befed26d429fab1ed19f40e58ea9925316ea59a2e801171f5c9665 
	I0120 12:33:49.638103  992109 cni.go:84] Creating CNI manager for ""
	I0120 12:33:49.638112  992109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:33:49.639791  992109 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 12:33:49.641114  992109 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 12:33:49.651726  992109 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 12:33:49.670543  992109 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 12:33:49.670636  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:49.670688  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-496524 minikube.k8s.io/updated_at=2025_01_20T12_33_49_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=77d80cf1517f5f1439721b28711982314b21bec9 minikube.k8s.io/name=no-preload-496524 minikube.k8s.io/primary=true
	I0120 12:33:49.704840  992109 ops.go:34] apiserver oom_adj: -16
	I0120 12:33:49.859209  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:50.359791  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:50.859509  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:51.359718  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:51.859742  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:52.359728  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:52.859803  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:53.359731  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:53.859729  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:53.963052  992109 kubeadm.go:1113] duration metric: took 4.292471944s to wait for elevateKubeSystemPrivileges
	I0120 12:33:53.963109  992109 kubeadm.go:394] duration metric: took 5m1.161906665s to StartCluster
	I0120 12:33:53.963139  992109 settings.go:142] acquiring lock: {Name:mk1751cb86b61fcfcb0ff093c93fb653ca2002ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:33:53.963257  992109 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:33:53.964929  992109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/kubeconfig: {Name:mk7b2d17f701fc845d991764ab9cc32b8b0646e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:33:53.965243  992109 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 12:33:53.965321  992109 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 12:33:53.965437  992109 addons.go:69] Setting storage-provisioner=true in profile "no-preload-496524"
	I0120 12:33:53.965452  992109 addons.go:69] Setting dashboard=true in profile "no-preload-496524"
	I0120 12:33:53.965477  992109 addons.go:238] Setting addon storage-provisioner=true in "no-preload-496524"
	W0120 12:33:53.965487  992109 addons.go:247] addon storage-provisioner should already be in state true
	I0120 12:33:53.965490  992109 addons.go:238] Setting addon dashboard=true in "no-preload-496524"
	I0120 12:33:53.965481  992109 addons.go:69] Setting default-storageclass=true in profile "no-preload-496524"
	W0120 12:33:53.965502  992109 addons.go:247] addon dashboard should already be in state true
	I0120 12:33:53.965518  992109 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-496524"
	I0120 12:33:53.965520  992109 config.go:182] Loaded profile config "no-preload-496524": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:33:53.965528  992109 host.go:66] Checking if "no-preload-496524" exists ...
	I0120 12:33:53.965534  992109 host.go:66] Checking if "no-preload-496524" exists ...
	I0120 12:33:53.965514  992109 addons.go:69] Setting metrics-server=true in profile "no-preload-496524"
	I0120 12:33:53.965570  992109 addons.go:238] Setting addon metrics-server=true in "no-preload-496524"
	W0120 12:33:53.965584  992109 addons.go:247] addon metrics-server should already be in state true
	I0120 12:33:53.965628  992109 host.go:66] Checking if "no-preload-496524" exists ...
	I0120 12:33:53.965928  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:33:53.965934  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:33:53.965947  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:33:53.965963  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:53.965985  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:53.966029  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:33:53.966054  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:53.966065  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:53.966567  992109 out.go:177] * Verifying Kubernetes components...
	I0120 12:33:53.967881  992109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:33:53.983553  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43169
	I0120 12:33:53.984079  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:53.984654  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:33:53.984681  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:53.985111  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:53.985353  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetState
	I0120 12:33:53.986475  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38587
	I0120 12:33:53.986716  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33637
	I0120 12:33:53.987021  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:53.987492  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:53.987571  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:33:53.987588  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:53.987741  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44123
	I0120 12:33:53.987942  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:53.988075  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:53.988425  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:33:53.988440  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:53.988577  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:33:53.988627  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:53.988783  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:33:53.988797  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:53.988855  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:53.989000  992109 addons.go:238] Setting addon default-storageclass=true in "no-preload-496524"
	W0120 12:33:53.989019  992109 addons.go:247] addon default-storageclass should already be in state true
	I0120 12:33:53.989052  992109 host.go:66] Checking if "no-preload-496524" exists ...
	I0120 12:33:53.989187  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:53.989393  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:33:53.989420  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:33:53.989431  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:53.989455  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:53.989672  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:33:53.989711  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:54.005609  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46523
	I0120 12:33:54.006182  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:54.006760  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:33:54.006786  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:54.007131  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41029
	I0120 12:33:54.007443  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:54.008065  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:33:54.008108  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:54.008308  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34409
	I0120 12:33:54.008359  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:54.008993  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:33:54.009021  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:54.009407  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:54.009597  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetState
	I0120 12:33:54.011591  992109 main.go:141] libmachine: (no-preload-496524) Calling .DriverName
	I0120 12:33:54.013572  992109 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0120 12:33:54.014814  992109 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0120 12:33:54.015103  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:54.015538  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:33:54.015562  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:54.015921  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0120 12:33:54.015946  992109 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0120 12:33:54.015970  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHHostname
	I0120 12:33:54.015997  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:54.016619  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetState
	I0120 12:33:54.018868  992109 main.go:141] libmachine: (no-preload-496524) Calling .DriverName
	I0120 12:33:54.019948  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:33:54.020370  992109 main.go:141] libmachine: (no-preload-496524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:8f:cb", ip: ""} in network mk-no-preload-496524: {Iface:virbr3 ExpiryTime:2025-01-20 13:28:26 +0000 UTC Type:0 Mac:52:54:00:13:8f:cb Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-496524 Clientid:01:52:54:00:13:8f:cb}
	I0120 12:33:54.020397  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined IP address 192.168.61.107 and MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:33:54.020522  992109 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0120 12:33:54.020716  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHPort
	I0120 12:33:54.020885  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHKeyPath
	I0120 12:33:54.020989  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHUsername
	I0120 12:33:54.021095  992109 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/no-preload-496524/id_rsa Username:docker}
	I0120 12:33:54.021561  992109 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 12:33:54.021576  992109 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 12:33:54.021592  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHHostname
	I0120 12:33:54.024577  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHPort
	I0120 12:33:54.024641  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:33:54.024669  992109 main.go:141] libmachine: (no-preload-496524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:8f:cb", ip: ""} in network mk-no-preload-496524: {Iface:virbr3 ExpiryTime:2025-01-20 13:28:26 +0000 UTC Type:0 Mac:52:54:00:13:8f:cb Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-496524 Clientid:01:52:54:00:13:8f:cb}
	I0120 12:33:54.024695  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined IP address 192.168.61.107 and MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:33:54.024723  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHKeyPath
	I0120 12:33:54.024878  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHUsername
	I0120 12:33:54.025140  992109 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/no-preload-496524/id_rsa Username:docker}
	I0120 12:33:54.032584  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39261
	I0120 12:33:54.032936  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:54.033474  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:33:54.033497  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:54.033809  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:54.034011  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetState
	I0120 12:33:54.035349  992109 main.go:141] libmachine: (no-preload-496524) Calling .DriverName
	I0120 12:33:54.035539  992109 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 12:33:54.035557  992109 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 12:33:54.035573  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHHostname
	I0120 12:33:54.037812  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:33:54.038056  992109 main.go:141] libmachine: (no-preload-496524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:8f:cb", ip: ""} in network mk-no-preload-496524: {Iface:virbr3 ExpiryTime:2025-01-20 13:28:26 +0000 UTC Type:0 Mac:52:54:00:13:8f:cb Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-496524 Clientid:01:52:54:00:13:8f:cb}
	I0120 12:33:54.038080  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined IP address 192.168.61.107 and MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:33:54.038193  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHPort
	I0120 12:33:54.038321  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHKeyPath
	I0120 12:33:54.038429  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHUsername
	I0120 12:33:54.038547  992109 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/no-preload-496524/id_rsa Username:docker}
	I0120 12:33:54.041727  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43939
	I0120 12:33:54.042162  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:54.042671  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:33:54.042694  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:54.043048  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:54.043263  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetState
	I0120 12:33:54.044523  992109 main.go:141] libmachine: (no-preload-496524) Calling .DriverName
	I0120 12:33:54.046748  992109 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:33:54.048049  992109 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:33:54.048070  992109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 12:33:54.048087  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHHostname
	I0120 12:33:54.050560  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:33:54.051116  992109 main.go:141] libmachine: (no-preload-496524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:8f:cb", ip: ""} in network mk-no-preload-496524: {Iface:virbr3 ExpiryTime:2025-01-20 13:28:26 +0000 UTC Type:0 Mac:52:54:00:13:8f:cb Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-496524 Clientid:01:52:54:00:13:8f:cb}
	I0120 12:33:54.051143  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined IP address 192.168.61.107 and MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:33:54.051300  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHPort
	I0120 12:33:54.051493  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHKeyPath
	I0120 12:33:54.051649  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHUsername
	I0120 12:33:54.051769  992109 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/no-preload-496524/id_rsa Username:docker}
	I0120 12:33:54.174035  992109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:33:54.197637  992109 node_ready.go:35] waiting up to 6m0s for node "no-preload-496524" to be "Ready" ...
	I0120 12:33:54.210713  992109 node_ready.go:49] node "no-preload-496524" has status "Ready":"True"
	I0120 12:33:54.210742  992109 node_ready.go:38] duration metric: took 13.074849ms for node "no-preload-496524" to be "Ready" ...
	I0120 12:33:54.210757  992109 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:33:54.218615  992109 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-8pf2c" in "kube-system" namespace to be "Ready" ...
	I0120 12:33:54.300046  992109 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 12:33:54.300080  992109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0120 12:33:54.351225  992109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:33:54.353768  992109 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 12:33:54.353789  992109 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 12:33:54.368467  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0120 12:33:54.368496  992109 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0120 12:33:54.371467  992109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 12:33:54.389639  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0120 12:33:54.389660  992109 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0120 12:33:54.401448  992109 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 12:33:54.401467  992109 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 12:33:54.465233  992109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 12:33:54.465824  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0120 12:33:54.465853  992109 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0120 12:33:54.543139  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0120 12:33:54.543178  992109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0120 12:33:54.687210  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0120 12:33:54.687234  992109 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0120 12:33:54.744978  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0120 12:33:54.745012  992109 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0120 12:33:54.771298  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0120 12:33:54.771332  992109 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0120 12:33:54.852878  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0120 12:33:54.852914  992109 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0120 12:33:54.886329  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 12:33:54.886362  992109 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0120 12:33:54.964102  992109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 12:33:55.906127  992109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.534613086s)
	I0120 12:33:55.906207  992109 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:55.906212  992109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.554946671s)
	I0120 12:33:55.906270  992109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.440998293s)
	I0120 12:33:55.906220  992109 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:33:55.906307  992109 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:55.906338  992109 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:33:55.906275  992109 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:55.906404  992109 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:33:55.906812  992109 main.go:141] libmachine: (no-preload-496524) DBG | Closing plugin on server side
	I0120 12:33:55.906854  992109 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:55.906855  992109 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:55.906862  992109 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:55.906874  992109 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:55.906877  992109 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:55.906883  992109 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:55.906886  992109 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:33:55.906893  992109 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:33:55.907039  992109 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:55.907058  992109 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:55.907081  992109 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:55.907090  992109 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:33:55.907187  992109 main.go:141] libmachine: (no-preload-496524) DBG | Closing plugin on server side
	I0120 12:33:55.907189  992109 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:55.907213  992109 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:55.908759  992109 main.go:141] libmachine: (no-preload-496524) DBG | Closing plugin on server side
	I0120 12:33:55.908766  992109 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:55.908783  992109 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:55.908801  992109 addons.go:479] Verifying addon metrics-server=true in "no-preload-496524"
	I0120 12:33:55.909118  992109 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:55.909137  992109 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:55.939415  992109 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:55.939434  992109 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:33:55.939756  992109 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:55.939772  992109 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:56.225171  992109 pod_ready.go:103] pod "coredns-668d6bf9bc-8pf2c" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:56.900293  992109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.936108167s)
	I0120 12:33:56.900402  992109 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:56.900428  992109 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:33:56.900904  992109 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:56.900913  992109 main.go:141] libmachine: (no-preload-496524) DBG | Closing plugin on server side
	I0120 12:33:56.900924  992109 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:56.900945  992109 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:56.900952  992109 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:33:56.901226  992109 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:56.901246  992109 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:56.902642  992109 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-496524 addons enable metrics-server
	
	I0120 12:33:56.904289  992109 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0120 12:33:56.905477  992109 addons.go:514] duration metric: took 2.940174389s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I0120 12:33:57.224557  992109 pod_ready.go:93] pod "coredns-668d6bf9bc-8pf2c" in "kube-system" namespace has status "Ready":"True"
	I0120 12:33:57.224585  992109 pod_ready.go:82] duration metric: took 3.005934718s for pod "coredns-668d6bf9bc-8pf2c" in "kube-system" namespace to be "Ready" ...
	I0120 12:33:57.224599  992109 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-rdj6t" in "kube-system" namespace to be "Ready" ...
	I0120 12:33:57.228981  992109 pod_ready.go:93] pod "coredns-668d6bf9bc-rdj6t" in "kube-system" namespace has status "Ready":"True"
	I0120 12:33:57.228999  992109 pod_ready.go:82] duration metric: took 4.392102ms for pod "coredns-668d6bf9bc-rdj6t" in "kube-system" namespace to be "Ready" ...
	I0120 12:33:57.229007  992109 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:33:59.239998  992109 pod_ready.go:103] pod "etcd-no-preload-496524" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:01.734840  992109 pod_ready.go:103] pod "etcd-no-preload-496524" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:03.790112  992109 pod_ready.go:103] pod "etcd-no-preload-496524" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:04.235638  992109 pod_ready.go:93] pod "etcd-no-preload-496524" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:04.235671  992109 pod_ready.go:82] duration metric: took 7.006654161s for pod "etcd-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:04.235686  992109 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:04.240203  992109 pod_ready.go:93] pod "kube-apiserver-no-preload-496524" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:04.240233  992109 pod_ready.go:82] duration metric: took 4.537744ms for pod "kube-apiserver-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:04.240248  992109 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:04.244405  992109 pod_ready.go:93] pod "kube-controller-manager-no-preload-496524" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:04.244431  992109 pod_ready.go:82] duration metric: took 4.172774ms for pod "kube-controller-manager-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:04.244445  992109 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dpn56" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:04.248277  992109 pod_ready.go:93] pod "kube-proxy-dpn56" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:04.248303  992109 pod_ready.go:82] duration metric: took 3.849341ms for pod "kube-proxy-dpn56" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:04.248315  992109 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:04.251995  992109 pod_ready.go:93] pod "kube-scheduler-no-preload-496524" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:04.252016  992109 pod_ready.go:82] duration metric: took 3.69304ms for pod "kube-scheduler-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:04.252025  992109 pod_ready.go:39] duration metric: took 10.041253574s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:34:04.252040  992109 api_server.go:52] waiting for apiserver process to appear ...
	I0120 12:34:04.252101  992109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:04.288797  992109 api_server.go:72] duration metric: took 10.323505838s to wait for apiserver process to appear ...
	I0120 12:34:04.288829  992109 api_server.go:88] waiting for apiserver healthz status ...
	I0120 12:34:04.288878  992109 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0120 12:34:04.297424  992109 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
	I0120 12:34:04.299152  992109 api_server.go:141] control plane version: v1.32.0
	I0120 12:34:04.299176  992109 api_server.go:131] duration metric: took 10.340981ms to wait for apiserver health ...
	I0120 12:34:04.299188  992109 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 12:34:04.437151  992109 system_pods.go:59] 9 kube-system pods found
	I0120 12:34:04.437187  992109 system_pods.go:61] "coredns-668d6bf9bc-8pf2c" [9402090c-afdc-4fd7-a673-155ca87b9afe] Running
	I0120 12:34:04.437194  992109 system_pods.go:61] "coredns-668d6bf9bc-rdj6t" [f7882da6-0b57-402a-a902-6c4e6a8c6cd1] Running
	I0120 12:34:04.437200  992109 system_pods.go:61] "etcd-no-preload-496524" [430610d7-4491-4d35-93d6-71738b1cad0f] Running
	I0120 12:34:04.437205  992109 system_pods.go:61] "kube-apiserver-no-preload-496524" [d028d3c0-5ee8-46cc-b8e5-95f7d07e43ca] Running
	I0120 12:34:04.437210  992109 system_pods.go:61] "kube-controller-manager-no-preload-496524" [b11b36da-c5a3-4fc6-8619-4f12fda64f63] Running
	I0120 12:34:04.437215  992109 system_pods.go:61] "kube-proxy-dpn56" [dbb78c21-4dfb-4a4f-9ca0-ff006da5d4b4] Running
	I0120 12:34:04.437219  992109 system_pods.go:61] "kube-scheduler-no-preload-496524" [80058f6c-526c-487f-82a5-74df5f2e0174] Running
	I0120 12:34:04.437227  992109 system_pods.go:61] "metrics-server-f79f97bbb-dbx78" [c8fb707c-75c2-42b6-802e-52a09222f9ea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 12:34:04.437234  992109 system_pods.go:61] "storage-provisioner" [14187f8e-01fd-45ac-a749-82ba272b727f] Running
	I0120 12:34:04.437246  992109 system_pods.go:74] duration metric: took 138.05086ms to wait for pod list to return data ...
	I0120 12:34:04.437257  992109 default_sa.go:34] waiting for default service account to be created ...
	I0120 12:34:04.636609  992109 default_sa.go:45] found service account: "default"
	I0120 12:34:04.636747  992109 default_sa.go:55] duration metric: took 199.476374ms for default service account to be created ...
	I0120 12:34:04.636770  992109 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 12:34:04.836002  992109 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p no-preload-496524 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0": signal: killed
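The stderr log above captures the second-start bootstrap before the suite killed the process: kubeadm init completes, the bridge CNI conflist is written, kube-system:default is granted cluster-admin, the dashboard/metrics-server/storage addons are applied, and the cluster is polled by repeatedly running "kubectl get sa default" (roughly every 500ms) until the default service account exists, which the log reports took about 4.3s. The snippet below is a minimal, hypothetical Go sketch of that retry pattern only, not minikube's actual implementation; the kubeconfig path is taken from the log, while the function name and the 2-minute timeout are illustrative assumptions.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA polls "kubectl get sa default" until it succeeds or the
	// timeout expires, mirroring the ~500ms probe cadence visible in the log.
	// (Hypothetical sketch; not minikube source.)
	func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
			if err := cmd.Run(); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not ready after %s", timeout)
	}

	func main() {
		if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
			fmt.Println("wait failed:", err)
			return
		}
		fmt.Println("default service account is ready")
	}

In the failed run above this wait itself converged quickly; the overall start was later terminated ("signal: killed") by the test's own deadline, which is why the post-mortem logs below are collected.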
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-496524 -n no-preload-496524
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-496524 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-496524 logs -n 25: (1.428852806s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-049625                           | kubernetes-upgrade-049625    | jenkins | v1.35.0 | 20 Jan 25 12:26 UTC | 20 Jan 25 12:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-049625                           | kubernetes-upgrade-049625    | jenkins | v1.35.0 | 20 Jan 25 12:26 UTC | 20 Jan 25 12:26 UTC |
	| start   | -p embed-certs-987349                                  | embed-certs-987349           | jenkins | v1.35.0 | 20 Jan 25 12:26 UTC | 20 Jan 25 12:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-496524             | no-preload-496524            | jenkins | v1.35.0 | 20 Jan 25 12:26 UTC | 20 Jan 25 12:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-496524                                   | no-preload-496524            | jenkins | v1.35.0 | 20 Jan 25 12:26 UTC | 20 Jan 25 12:28 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-673364                              | cert-expiration-673364       | jenkins | v1.35.0 | 20 Jan 25 12:27 UTC | 20 Jan 25 12:27 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-673364                              | cert-expiration-673364       | jenkins | v1.35.0 | 20 Jan 25 12:27 UTC | 20 Jan 25 12:27 UTC |
	| delete  | -p                                                     | disable-driver-mounts-969801 | jenkins | v1.35.0 | 20 Jan 25 12:27 UTC | 20 Jan 25 12:27 UTC |
	|         | disable-driver-mounts-969801                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-981597 | jenkins | v1.35.0 | 20 Jan 25 12:27 UTC | 20 Jan 25 12:28 UTC |
	|         | default-k8s-diff-port-981597                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-987349            | embed-certs-987349           | jenkins | v1.35.0 | 20 Jan 25 12:27 UTC | 20 Jan 25 12:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-987349                                  | embed-certs-987349           | jenkins | v1.35.0 | 20 Jan 25 12:27 UTC | 20 Jan 25 12:29 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-496524                  | no-preload-496524            | jenkins | v1.35.0 | 20 Jan 25 12:28 UTC | 20 Jan 25 12:28 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-496524                                   | no-preload-496524            | jenkins | v1.35.0 | 20 Jan 25 12:28 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-981597  | default-k8s-diff-port-981597 | jenkins | v1.35.0 | 20 Jan 25 12:28 UTC | 20 Jan 25 12:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-981597 | jenkins | v1.35.0 | 20 Jan 25 12:28 UTC | 20 Jan 25 12:30 UTC |
	|         | default-k8s-diff-port-981597                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-987349                 | embed-certs-987349           | jenkins | v1.35.0 | 20 Jan 25 12:29 UTC | 20 Jan 25 12:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-987349                                  | embed-certs-987349           | jenkins | v1.35.0 | 20 Jan 25 12:29 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-134433        | old-k8s-version-134433       | jenkins | v1.35.0 | 20 Jan 25 12:29 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-981597       | default-k8s-diff-port-981597 | jenkins | v1.35.0 | 20 Jan 25 12:30 UTC | 20 Jan 25 12:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-981597 | jenkins | v1.35.0 | 20 Jan 25 12:30 UTC |                     |
	|         | default-k8s-diff-port-981597                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-134433                              | old-k8s-version-134433       | jenkins | v1.35.0 | 20 Jan 25 12:31 UTC | 20 Jan 25 12:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-134433             | old-k8s-version-134433       | jenkins | v1.35.0 | 20 Jan 25 12:31 UTC | 20 Jan 25 12:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-134433                              | old-k8s-version-134433       | jenkins | v1.35.0 | 20 Jan 25 12:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-134433                              | old-k8s-version-134433       | jenkins | v1.35.0 | 20 Jan 25 12:54 UTC | 20 Jan 25 12:55 UTC |
	| start   | -p newest-cni-476001 --memory=2200 --alsologtostderr   | newest-cni-476001            | jenkins | v1.35.0 | 20 Jan 25 12:55 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 12:55:00
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 12:55:00.373544  999146 out.go:345] Setting OutFile to fd 1 ...
	I0120 12:55:00.373799  999146 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:55:00.373808  999146 out.go:358] Setting ErrFile to fd 2...
	I0120 12:55:00.373812  999146 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:55:00.374008  999146 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-942401/.minikube/bin
	I0120 12:55:00.374685  999146 out.go:352] Setting JSON to false
	I0120 12:55:00.376003  999146 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":20243,"bootTime":1737357457,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 12:55:00.376084  999146 start.go:139] virtualization: kvm guest
	I0120 12:55:00.378397  999146 out.go:177] * [newest-cni-476001] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 12:55:00.379823  999146 notify.go:220] Checking for updates...
	I0120 12:55:00.379871  999146 out.go:177]   - MINIKUBE_LOCATION=20151
	I0120 12:55:00.381143  999146 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 12:55:00.382245  999146 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:55:00.383584  999146 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-942401/.minikube
	I0120 12:55:00.384871  999146 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 12:55:00.386592  999146 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 12:55:00.388330  999146 config.go:182] Loaded profile config "default-k8s-diff-port-981597": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:55:00.388466  999146 config.go:182] Loaded profile config "embed-certs-987349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:55:00.388578  999146 config.go:182] Loaded profile config "no-preload-496524": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:55:00.388806  999146 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 12:55:00.427947  999146 out.go:177] * Using the kvm2 driver based on user configuration
	I0120 12:55:00.429172  999146 start.go:297] selected driver: kvm2
	I0120 12:55:00.429202  999146 start.go:901] validating driver "kvm2" against <nil>
	I0120 12:55:00.429230  999146 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 12:55:00.430158  999146 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:55:00.430288  999146 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20151-942401/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 12:55:00.447414  999146 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 12:55:00.447497  999146 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0120 12:55:00.447599  999146 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0120 12:55:00.447847  999146 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0120 12:55:00.447888  999146 cni.go:84] Creating CNI manager for ""
	I0120 12:55:00.447960  999146 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:55:00.447974  999146 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0120 12:55:00.448045  999146 start.go:340] cluster config:
	{Name:newest-cni-476001 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:newest-cni-476001 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: N
etworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath
: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:55:00.448237  999146 iso.go:125] acquiring lock: {Name:mk7f0ac9e7ba04626414e742c3e5292d79996a4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:55:00.449924  999146 out.go:177] * Starting "newest-cni-476001" primary control-plane node in "newest-cni-476001" cluster
	I0120 12:55:00.451086  999146 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 12:55:00.451144  999146 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0120 12:55:00.451165  999146 cache.go:56] Caching tarball of preloaded images
	I0120 12:55:00.451267  999146 preload.go:172] Found /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 12:55:00.451281  999146 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0120 12:55:00.451386  999146 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/newest-cni-476001/config.json ...
	I0120 12:55:00.451411  999146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/newest-cni-476001/config.json: {Name:mkc1370ff151c3f58139324c165b4018300efaa6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:55:00.451575  999146 start.go:360] acquireMachinesLock for newest-cni-476001: {Name:mkd5527ce9753efd08511b23d71dbb6bbf416f1b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 12:55:00.451620  999146 start.go:364] duration metric: took 28.885µs to acquireMachinesLock for "newest-cni-476001"
	I0120 12:55:00.451646  999146 start.go:93] Provisioning new machine with config: &{Name:newest-cni-476001 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:newest-cni-476001
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 12:55:00.451723  999146 start.go:125] createHost starting for "" (driver="kvm2")
	I0120 12:55:00.454030  999146 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0120 12:55:00.454188  999146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:55:00.454248  999146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:55:00.470577  999146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43115
	I0120 12:55:00.471038  999146 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:55:00.471586  999146 main.go:141] libmachine: Using API Version  1
	I0120 12:55:00.471615  999146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:55:00.472000  999146 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:55:00.472306  999146 main.go:141] libmachine: (newest-cni-476001) Calling .GetMachineName
	I0120 12:55:00.472518  999146 main.go:141] libmachine: (newest-cni-476001) Calling .DriverName
	I0120 12:55:00.472701  999146 start.go:159] libmachine.API.Create for "newest-cni-476001" (driver="kvm2")
	I0120 12:55:00.472755  999146 client.go:168] LocalClient.Create starting
	I0120 12:55:00.472843  999146 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem
	I0120 12:55:00.472887  999146 main.go:141] libmachine: Decoding PEM data...
	I0120 12:55:00.472904  999146 main.go:141] libmachine: Parsing certificate...
	I0120 12:55:00.472970  999146 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem
	I0120 12:55:00.472994  999146 main.go:141] libmachine: Decoding PEM data...
	I0120 12:55:00.473003  999146 main.go:141] libmachine: Parsing certificate...
	I0120 12:55:00.473016  999146 main.go:141] libmachine: Running pre-create checks...
	I0120 12:55:00.473026  999146 main.go:141] libmachine: (newest-cni-476001) Calling .PreCreateCheck
	I0120 12:55:00.473400  999146 main.go:141] libmachine: (newest-cni-476001) Calling .GetConfigRaw
	I0120 12:55:00.473930  999146 main.go:141] libmachine: Creating machine...
	I0120 12:55:00.473943  999146 main.go:141] libmachine: (newest-cni-476001) Calling .Create
	I0120 12:55:00.474095  999146 main.go:141] libmachine: (newest-cni-476001) creating KVM machine...
	I0120 12:55:00.474117  999146 main.go:141] libmachine: (newest-cni-476001) creating network...
	I0120 12:55:00.475576  999146 main.go:141] libmachine: (newest-cni-476001) DBG | found existing default KVM network
	I0120 12:55:00.477080  999146 main.go:141] libmachine: (newest-cni-476001) DBG | I0120 12:55:00.476895  999169 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:dd:36:f0} reservation:<nil>}
	I0120 12:55:00.478617  999146 main.go:141] libmachine: (newest-cni-476001) DBG | I0120 12:55:00.478515  999169 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000205720}
	I0120 12:55:00.478737  999146 main.go:141] libmachine: (newest-cni-476001) DBG | created network xml: 
	I0120 12:55:00.478755  999146 main.go:141] libmachine: (newest-cni-476001) DBG | <network>
	I0120 12:55:00.478765  999146 main.go:141] libmachine: (newest-cni-476001) DBG |   <name>mk-newest-cni-476001</name>
	I0120 12:55:00.478774  999146 main.go:141] libmachine: (newest-cni-476001) DBG |   <dns enable='no'/>
	I0120 12:55:00.478787  999146 main.go:141] libmachine: (newest-cni-476001) DBG |   
	I0120 12:55:00.478803  999146 main.go:141] libmachine: (newest-cni-476001) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0120 12:55:00.478818  999146 main.go:141] libmachine: (newest-cni-476001) DBG |     <dhcp>
	I0120 12:55:00.478831  999146 main.go:141] libmachine: (newest-cni-476001) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0120 12:55:00.478843  999146 main.go:141] libmachine: (newest-cni-476001) DBG |     </dhcp>
	I0120 12:55:00.478853  999146 main.go:141] libmachine: (newest-cni-476001) DBG |   </ip>
	I0120 12:55:00.478865  999146 main.go:141] libmachine: (newest-cni-476001) DBG |   
	I0120 12:55:00.478872  999146 main.go:141] libmachine: (newest-cni-476001) DBG | </network>
	I0120 12:55:00.478911  999146 main.go:141] libmachine: (newest-cni-476001) DBG | 
	I0120 12:55:00.484448  999146 main.go:141] libmachine: (newest-cni-476001) DBG | trying to create private KVM network mk-newest-cni-476001 192.168.50.0/24...
	I0120 12:55:00.563728  999146 main.go:141] libmachine: (newest-cni-476001) DBG | private KVM network mk-newest-cni-476001 192.168.50.0/24 created
	I0120 12:55:00.563786  999146 main.go:141] libmachine: (newest-cni-476001) setting up store path in /home/jenkins/minikube-integration/20151-942401/.minikube/machines/newest-cni-476001 ...
	I0120 12:55:00.563850  999146 main.go:141] libmachine: (newest-cni-476001) building disk image from file:///home/jenkins/minikube-integration/20151-942401/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0120 12:55:00.564015  999146 main.go:141] libmachine: (newest-cni-476001) DBG | I0120 12:55:00.563817  999169 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20151-942401/.minikube
	I0120 12:55:00.564041  999146 main.go:141] libmachine: (newest-cni-476001) Downloading /home/jenkins/minikube-integration/20151-942401/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20151-942401/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0120 12:55:00.919863  999146 main.go:141] libmachine: (newest-cni-476001) DBG | I0120 12:55:00.919662  999169 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/newest-cni-476001/id_rsa...
	I0120 12:55:01.005441  999146 main.go:141] libmachine: (newest-cni-476001) DBG | I0120 12:55:01.005330  999169 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/newest-cni-476001/newest-cni-476001.rawdisk...
	I0120 12:55:01.005471  999146 main.go:141] libmachine: (newest-cni-476001) DBG | Writing magic tar header
	I0120 12:55:01.005491  999146 main.go:141] libmachine: (newest-cni-476001) DBG | Writing SSH key tar header
	I0120 12:55:01.005500  999146 main.go:141] libmachine: (newest-cni-476001) DBG | I0120 12:55:01.005476  999169 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20151-942401/.minikube/machines/newest-cni-476001 ...
	I0120 12:55:01.005629  999146 main.go:141] libmachine: (newest-cni-476001) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/newest-cni-476001
	I0120 12:55:01.005665  999146 main.go:141] libmachine: (newest-cni-476001) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-942401/.minikube/machines
	I0120 12:55:01.005676  999146 main.go:141] libmachine: (newest-cni-476001) setting executable bit set on /home/jenkins/minikube-integration/20151-942401/.minikube/machines/newest-cni-476001 (perms=drwx------)
	I0120 12:55:01.005693  999146 main.go:141] libmachine: (newest-cni-476001) setting executable bit set on /home/jenkins/minikube-integration/20151-942401/.minikube/machines (perms=drwxr-xr-x)
	I0120 12:55:01.005701  999146 main.go:141] libmachine: (newest-cni-476001) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-942401/.minikube
	I0120 12:55:01.005709  999146 main.go:141] libmachine: (newest-cni-476001) setting executable bit set on /home/jenkins/minikube-integration/20151-942401/.minikube (perms=drwxr-xr-x)
	I0120 12:55:01.005723  999146 main.go:141] libmachine: (newest-cni-476001) setting executable bit set on /home/jenkins/minikube-integration/20151-942401 (perms=drwxrwxr-x)
	I0120 12:55:01.005739  999146 main.go:141] libmachine: (newest-cni-476001) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0120 12:55:01.005758  999146 main.go:141] libmachine: (newest-cni-476001) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-942401
	I0120 12:55:01.005769  999146 main.go:141] libmachine: (newest-cni-476001) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0120 12:55:01.005776  999146 main.go:141] libmachine: (newest-cni-476001) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0120 12:55:01.005788  999146 main.go:141] libmachine: (newest-cni-476001) creating domain...
	I0120 12:55:01.005799  999146 main.go:141] libmachine: (newest-cni-476001) DBG | checking permissions on dir: /home/jenkins
	I0120 12:55:01.005813  999146 main.go:141] libmachine: (newest-cni-476001) DBG | checking permissions on dir: /home
	I0120 12:55:01.005826  999146 main.go:141] libmachine: (newest-cni-476001) DBG | skipping /home - not owner
	I0120 12:55:01.006922  999146 main.go:141] libmachine: (newest-cni-476001) define libvirt domain using xml: 
	I0120 12:55:01.006948  999146 main.go:141] libmachine: (newest-cni-476001) <domain type='kvm'>
	I0120 12:55:01.006956  999146 main.go:141] libmachine: (newest-cni-476001)   <name>newest-cni-476001</name>
	I0120 12:55:01.006961  999146 main.go:141] libmachine: (newest-cni-476001)   <memory unit='MiB'>2200</memory>
	I0120 12:55:01.006966  999146 main.go:141] libmachine: (newest-cni-476001)   <vcpu>2</vcpu>
	I0120 12:55:01.006970  999146 main.go:141] libmachine: (newest-cni-476001)   <features>
	I0120 12:55:01.006977  999146 main.go:141] libmachine: (newest-cni-476001)     <acpi/>
	I0120 12:55:01.006986  999146 main.go:141] libmachine: (newest-cni-476001)     <apic/>
	I0120 12:55:01.006993  999146 main.go:141] libmachine: (newest-cni-476001)     <pae/>
	I0120 12:55:01.007000  999146 main.go:141] libmachine: (newest-cni-476001)     
	I0120 12:55:01.007011  999146 main.go:141] libmachine: (newest-cni-476001)   </features>
	I0120 12:55:01.007018  999146 main.go:141] libmachine: (newest-cni-476001)   <cpu mode='host-passthrough'>
	I0120 12:55:01.007023  999146 main.go:141] libmachine: (newest-cni-476001)   
	I0120 12:55:01.007030  999146 main.go:141] libmachine: (newest-cni-476001)   </cpu>
	I0120 12:55:01.007034  999146 main.go:141] libmachine: (newest-cni-476001)   <os>
	I0120 12:55:01.007043  999146 main.go:141] libmachine: (newest-cni-476001)     <type>hvm</type>
	I0120 12:55:01.007078  999146 main.go:141] libmachine: (newest-cni-476001)     <boot dev='cdrom'/>
	I0120 12:55:01.007112  999146 main.go:141] libmachine: (newest-cni-476001)     <boot dev='hd'/>
	I0120 12:55:01.007155  999146 main.go:141] libmachine: (newest-cni-476001)     <bootmenu enable='no'/>
	I0120 12:55:01.007183  999146 main.go:141] libmachine: (newest-cni-476001)   </os>
	I0120 12:55:01.007197  999146 main.go:141] libmachine: (newest-cni-476001)   <devices>
	I0120 12:55:01.007208  999146 main.go:141] libmachine: (newest-cni-476001)     <disk type='file' device='cdrom'>
	I0120 12:55:01.007223  999146 main.go:141] libmachine: (newest-cni-476001)       <source file='/home/jenkins/minikube-integration/20151-942401/.minikube/machines/newest-cni-476001/boot2docker.iso'/>
	I0120 12:55:01.007236  999146 main.go:141] libmachine: (newest-cni-476001)       <target dev='hdc' bus='scsi'/>
	I0120 12:55:01.007249  999146 main.go:141] libmachine: (newest-cni-476001)       <readonly/>
	I0120 12:55:01.007265  999146 main.go:141] libmachine: (newest-cni-476001)     </disk>
	I0120 12:55:01.007278  999146 main.go:141] libmachine: (newest-cni-476001)     <disk type='file' device='disk'>
	I0120 12:55:01.007292  999146 main.go:141] libmachine: (newest-cni-476001)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0120 12:55:01.007308  999146 main.go:141] libmachine: (newest-cni-476001)       <source file='/home/jenkins/minikube-integration/20151-942401/.minikube/machines/newest-cni-476001/newest-cni-476001.rawdisk'/>
	I0120 12:55:01.007320  999146 main.go:141] libmachine: (newest-cni-476001)       <target dev='hda' bus='virtio'/>
	I0120 12:55:01.007331  999146 main.go:141] libmachine: (newest-cni-476001)     </disk>
	I0120 12:55:01.007341  999146 main.go:141] libmachine: (newest-cni-476001)     <interface type='network'>
	I0120 12:55:01.007347  999146 main.go:141] libmachine: (newest-cni-476001)       <source network='mk-newest-cni-476001'/>
	I0120 12:55:01.007361  999146 main.go:141] libmachine: (newest-cni-476001)       <model type='virtio'/>
	I0120 12:55:01.007366  999146 main.go:141] libmachine: (newest-cni-476001)     </interface>
	I0120 12:55:01.007373  999146 main.go:141] libmachine: (newest-cni-476001)     <interface type='network'>
	I0120 12:55:01.007379  999146 main.go:141] libmachine: (newest-cni-476001)       <source network='default'/>
	I0120 12:55:01.007386  999146 main.go:141] libmachine: (newest-cni-476001)       <model type='virtio'/>
	I0120 12:55:01.007391  999146 main.go:141] libmachine: (newest-cni-476001)     </interface>
	I0120 12:55:01.007398  999146 main.go:141] libmachine: (newest-cni-476001)     <serial type='pty'>
	I0120 12:55:01.007403  999146 main.go:141] libmachine: (newest-cni-476001)       <target port='0'/>
	I0120 12:55:01.007409  999146 main.go:141] libmachine: (newest-cni-476001)     </serial>
	I0120 12:55:01.007442  999146 main.go:141] libmachine: (newest-cni-476001)     <console type='pty'>
	I0120 12:55:01.007465  999146 main.go:141] libmachine: (newest-cni-476001)       <target type='serial' port='0'/>
	I0120 12:55:01.007478  999146 main.go:141] libmachine: (newest-cni-476001)     </console>
	I0120 12:55:01.007490  999146 main.go:141] libmachine: (newest-cni-476001)     <rng model='virtio'>
	I0120 12:55:01.007503  999146 main.go:141] libmachine: (newest-cni-476001)       <backend model='random'>/dev/random</backend>
	I0120 12:55:01.007513  999146 main.go:141] libmachine: (newest-cni-476001)     </rng>
	I0120 12:55:01.007524  999146 main.go:141] libmachine: (newest-cni-476001)     
	I0120 12:55:01.007533  999146 main.go:141] libmachine: (newest-cni-476001)     
	I0120 12:55:01.007540  999146 main.go:141] libmachine: (newest-cni-476001)   </devices>
	I0120 12:55:01.007547  999146 main.go:141] libmachine: (newest-cni-476001) </domain>
	I0120 12:55:01.007557  999146 main.go:141] libmachine: (newest-cni-476001) 
	I0120 12:55:01.012192  999146 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:45:74:7e in network default
	I0120 12:55:01.012795  999146 main.go:141] libmachine: (newest-cni-476001) starting domain...
	I0120 12:55:01.012817  999146 main.go:141] libmachine: (newest-cni-476001) ensuring networks are active...
	I0120 12:55:01.012833  999146 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:55:01.013544  999146 main.go:141] libmachine: (newest-cni-476001) Ensuring network default is active
	I0120 12:55:01.013912  999146 main.go:141] libmachine: (newest-cni-476001) Ensuring network mk-newest-cni-476001 is active
	I0120 12:55:01.014709  999146 main.go:141] libmachine: (newest-cni-476001) getting domain XML...
	I0120 12:55:01.015540  999146 main.go:141] libmachine: (newest-cni-476001) creating domain...
	I0120 12:55:02.317282  999146 main.go:141] libmachine: (newest-cni-476001) waiting for IP...
	I0120 12:55:02.318118  999146 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:55:02.318703  999146 main.go:141] libmachine: (newest-cni-476001) DBG | unable to find current IP address of domain newest-cni-476001 in network mk-newest-cni-476001
	I0120 12:55:02.318785  999146 main.go:141] libmachine: (newest-cni-476001) DBG | I0120 12:55:02.318707  999169 retry.go:31] will retry after 294.984133ms: waiting for domain to come up
	I0120 12:55:02.616891  999146 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:55:02.617455  999146 main.go:141] libmachine: (newest-cni-476001) DBG | unable to find current IP address of domain newest-cni-476001 in network mk-newest-cni-476001
	I0120 12:55:02.617483  999146 main.go:141] libmachine: (newest-cni-476001) DBG | I0120 12:55:02.617413  999169 retry.go:31] will retry after 335.012955ms: waiting for domain to come up
	I0120 12:55:02.954142  999146 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:55:02.954722  999146 main.go:141] libmachine: (newest-cni-476001) DBG | unable to find current IP address of domain newest-cni-476001 in network mk-newest-cni-476001
	I0120 12:55:02.954773  999146 main.go:141] libmachine: (newest-cni-476001) DBG | I0120 12:55:02.954703  999169 retry.go:31] will retry after 367.710166ms: waiting for domain to come up
	I0120 12:55:03.324331  999146 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:55:03.325023  999146 main.go:141] libmachine: (newest-cni-476001) DBG | unable to find current IP address of domain newest-cni-476001 in network mk-newest-cni-476001
	I0120 12:55:03.325053  999146 main.go:141] libmachine: (newest-cni-476001) DBG | I0120 12:55:03.324992  999169 retry.go:31] will retry after 457.957923ms: waiting for domain to come up
	I0120 12:55:03.784938  999146 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:55:03.785649  999146 main.go:141] libmachine: (newest-cni-476001) DBG | unable to find current IP address of domain newest-cni-476001 in network mk-newest-cni-476001
	I0120 12:55:03.785719  999146 main.go:141] libmachine: (newest-cni-476001) DBG | I0120 12:55:03.785613  999169 retry.go:31] will retry after 483.696152ms: waiting for domain to come up
	I0120 12:55:04.271334  999146 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:55:04.271932  999146 main.go:141] libmachine: (newest-cni-476001) DBG | unable to find current IP address of domain newest-cni-476001 in network mk-newest-cni-476001
	I0120 12:55:04.271969  999146 main.go:141] libmachine: (newest-cni-476001) DBG | I0120 12:55:04.271867  999169 retry.go:31] will retry after 699.233363ms: waiting for domain to come up
	I0120 12:55:04.972638  999146 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:55:04.973296  999146 main.go:141] libmachine: (newest-cni-476001) DBG | unable to find current IP address of domain newest-cni-476001 in network mk-newest-cni-476001
	I0120 12:55:04.973325  999146 main.go:141] libmachine: (newest-cni-476001) DBG | I0120 12:55:04.973268  999169 retry.go:31] will retry after 905.610763ms: waiting for domain to come up
	I0120 12:55:05.880873  999146 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:55:05.881443  999146 main.go:141] libmachine: (newest-cni-476001) DBG | unable to find current IP address of domain newest-cni-476001 in network mk-newest-cni-476001
	I0120 12:55:05.881473  999146 main.go:141] libmachine: (newest-cni-476001) DBG | I0120 12:55:05.881368  999169 retry.go:31] will retry after 1.45453173s: waiting for domain to come up
	I0120 12:55:07.337726  999146 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:55:07.338149  999146 main.go:141] libmachine: (newest-cni-476001) DBG | unable to find current IP address of domain newest-cni-476001 in network mk-newest-cni-476001
	I0120 12:55:07.338200  999146 main.go:141] libmachine: (newest-cni-476001) DBG | I0120 12:55:07.338136  999169 retry.go:31] will retry after 1.823018507s: waiting for domain to come up
	I0120 12:55:09.163098  999146 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:55:09.163625  999146 main.go:141] libmachine: (newest-cni-476001) DBG | unable to find current IP address of domain newest-cni-476001 in network mk-newest-cni-476001
	I0120 12:55:09.163654  999146 main.go:141] libmachine: (newest-cni-476001) DBG | I0120 12:55:09.163591  999169 retry.go:31] will retry after 2.266882731s: waiting for domain to come up
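	The tail of the start log above shows the driver polling libvirt for the new domain's DHCP lease, sleeping a little longer after each failed attempt (~295ms, 335ms, 368ms, ... up to ~2.3s). A minimal Go sketch of that wait-with-growing-backoff pattern follows; the helper names (waitForIP, lookupIP) and the growth factor are illustrative assumptions, not minikube's actual retry helpers.
	
	    // Sketch: poll a lookup until it succeeds or the deadline passes,
	    // growing the delay between attempts, as the retry.go lines above do.
	    package main
	
	    import (
	    	"errors"
	    	"fmt"
	    	"time"
	    )
	
	    var errNoLease = errors.New("no DHCP lease yet")
	
	    // lookupIP stands in for querying libvirt for the domain's DHCP lease.
	    // Here it simply succeeds on the fifth attempt so the example terminates.
	    func lookupIP(attempt int) (string, error) {
	    	if attempt < 5 {
	    		return "", errNoLease
	    	}
	    	return "192.168.50.10", nil
	    }
	
	    // waitForIP retries lookupIP with a growing delay until it succeeds
	    // or the overall timeout would be exceeded.
	    func waitForIP(timeout time.Duration) (string, error) {
	    	deadline := time.Now().Add(timeout)
	    	delay := 300 * time.Millisecond
	    	for attempt := 1; ; attempt++ {
	    		ip, err := lookupIP(attempt)
	    		if err == nil {
	    			return ip, nil
	    		}
	    		if time.Now().Add(delay).After(deadline) {
	    			return "", fmt.Errorf("timed out waiting for IP: %w", err)
	    		}
	    		fmt.Printf("attempt %d: %v, will retry after %v\n", attempt, err, delay)
	    		time.Sleep(delay)
	    		delay = delay * 3 / 2 // grow the backoff, roughly as the log shows
	    	}
	    }
	
	    func main() {
	    	ip, err := waitForIP(30 * time.Second)
	    	if err != nil {
	    		fmt.Println("error:", err)
	    		return
	    	}
	    	fmt.Println("domain is up at", ip)
	    }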
	
	
	==> CRI-O <==
	Jan 20 12:55:15 no-preload-496524 crio[729]: time="2025-01-20 12:55:15.037317050Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377715037296369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152099,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8796989f-3ee5-46ef-bbdd-8ed5166d3f2d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:55:15 no-preload-496524 crio[729]: time="2025-01-20 12:55:15.037836386Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=86545d8b-3ba5-4d3c-8e75-dcc78f646bbc name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:55:15 no-preload-496524 crio[729]: time="2025-01-20 12:55:15.037909005Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=86545d8b-3ba5-4d3c-8e75-dcc78f646bbc name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:55:15 no-preload-496524 crio[729]: time="2025-01-20 12:55:15.038165110Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a8bcc3a649de1d02d580cfbe9859e5e81f3cd6662626451fe0fdc363e02b2fc9,PodSandboxId:6da76c72160dc42422af2da1cb465a30787bc0e9aabfa7609300e648ab0dd21e,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737377694967536339,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-4rknb,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 3bfad14e-a251-466a-8a85-81508552fc55,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.c
ontainer.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:563a2e5ea56d92b2a562b84d363ca731b722b22d001798efb53ca127a7d4d047,PodSandboxId:94228031e5c5c20b557fb596a445de80c7a0b39fa1cafb3a0e2e4c03ba9c304c,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737376444737642480,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-9lvm6,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kuberne
tes.pod.uid: 90ae7aa3-bf43-4fce-bb58-b6f7e0994b20,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40748bebc505c644e345ffd8b0f1e2f8904567950d229f9213a10e79e6ec7ac9,PodSandboxId:8d4a058a90983d24e0bf61bbe405248a63911c5652ecf38e05502599a383fc96,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737376436400797440,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: stor
age-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14187f8e-01fd-45ac-a749-82ba272b727f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d97628d36afe279ca127a9afb09193e5c144368b5205bd519eaa9f9aa98137f,PodSandboxId:bd341ed789825828e5143f198468d62ed0214c03cd3a3e0043da4f2545e4ca44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737376435605298728,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-8pf2c,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 9402090c-afdc-4fd7-a673-155ca87b9afe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a31b08c3648046d9d022776b7face16b79a1c8ff1e7f6bf7c85022bf0fbc448,PodSandboxId:a9e22879ef5c9c065ec0da7fe15c4b6d3cd9ea6e6b3011098f9762444239cebf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc5675
91790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737376435466493009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-rdj6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7882da6-0b57-402a-a902-6c4e6a8c6cd1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ad8bc1026f1bdb7177032b82e72d0c546217740b8b1dc87d93c1b94d3a6e95b,PodSandboxId:4df0b5508f4f2f08f9d16ea11aeb11a8316012fbc05318c803f657791b3ff713,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Ima
ge:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737376434804870471,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dpn56,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbb78c21-4dfb-4a4f-9ca0-ff006da5d4b4,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2daf0a184f3621895e927061be963e75a238a6baa0776ff99c5d196220d1b2c,PodSandboxId:f42ff19a45fd2b6cfda40f6aced1865ffcba4d2d57f74e44e6f88c78abbd956a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4e
b408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737376423827725075,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-496524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6edc389504e3f38bed9f1ad992199b4,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbbf9c4a987bb2466a139975d9e1afa1bf6fcec747b91d0d2afe39556318f516,PodSandboxId:c46b37f716222d06b991bb23f24fff0de8311f793ee64a6f484504938fbe0981,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f
35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737376423855069559,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-496524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab02706afc351bbff75bacaf97e43f14,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c3690bb3e6854d305c4805423961a11049733ec67e0ed2dc51a6421233628a,PodSandboxId:6d65c6fd36215565ae8dab7e0467557da5fcb828092abbdc852bc24660f76c0a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737376423831364502,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-496524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317b5b9620f12a58d628eb3c18c2c4a6,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17e6cb39fe29be013ebee1fe2ada251167dfbeb46182887c0d2577fbeb2f6bc0,PodSandboxId:67a6ee21de304af77a0f8be36e6ed0bf7da423ecd150ea9015750b8e25366b84,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737376423767922456,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-496524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8390da1bf234e29c7ae67d55e30de9b2,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e03570914c73e6ba99d2ef0edd34765e99d61cbab1366bdd62261881c417a99f,PodSandboxId:3c484088b74c1bdf63332e375d54a21db1d1ef42315c6e1c233005267de5a9b0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1737376135144067003,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-496524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6edc389504e3f38bed9f1ad992199b4,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=86545d8b-3ba5-4d3c-8e75-dcc78f646bbc name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:55:15 no-preload-496524 crio[729]: time="2025-01-20 12:55:15.077323453Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=216cc53b-8081-4e68-861c-9cf966297b48 name=/runtime.v1.RuntimeService/Version
	Jan 20 12:55:15 no-preload-496524 crio[729]: time="2025-01-20 12:55:15.077463454Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=216cc53b-8081-4e68-861c-9cf966297b48 name=/runtime.v1.RuntimeService/Version
	Jan 20 12:55:15 no-preload-496524 crio[729]: time="2025-01-20 12:55:15.078765959Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=65c7d407-8c6a-40a7-bb21-bd72e231899b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:55:15 no-preload-496524 crio[729]: time="2025-01-20 12:55:15.079137854Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377715079118822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152099,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=65c7d407-8c6a-40a7-bb21-bd72e231899b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:55:15 no-preload-496524 crio[729]: time="2025-01-20 12:55:15.079664205Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d0c777c6-286e-4efc-8eeb-d158b999811e name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:55:15 no-preload-496524 crio[729]: time="2025-01-20 12:55:15.079717690Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d0c777c6-286e-4efc-8eeb-d158b999811e name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:55:15 no-preload-496524 crio[729]: time="2025-01-20 12:55:15.079957974Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a8bcc3a649de1d02d580cfbe9859e5e81f3cd6662626451fe0fdc363e02b2fc9,PodSandboxId:6da76c72160dc42422af2da1cb465a30787bc0e9aabfa7609300e648ab0dd21e,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737377694967536339,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-4rknb,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 3bfad14e-a251-466a-8a85-81508552fc55,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.c
ontainer.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:563a2e5ea56d92b2a562b84d363ca731b722b22d001798efb53ca127a7d4d047,PodSandboxId:94228031e5c5c20b557fb596a445de80c7a0b39fa1cafb3a0e2e4c03ba9c304c,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737376444737642480,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-9lvm6,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kuberne
tes.pod.uid: 90ae7aa3-bf43-4fce-bb58-b6f7e0994b20,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40748bebc505c644e345ffd8b0f1e2f8904567950d229f9213a10e79e6ec7ac9,PodSandboxId:8d4a058a90983d24e0bf61bbe405248a63911c5652ecf38e05502599a383fc96,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737376436400797440,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: stor
age-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14187f8e-01fd-45ac-a749-82ba272b727f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d97628d36afe279ca127a9afb09193e5c144368b5205bd519eaa9f9aa98137f,PodSandboxId:bd341ed789825828e5143f198468d62ed0214c03cd3a3e0043da4f2545e4ca44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737376435605298728,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-8pf2c,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 9402090c-afdc-4fd7-a673-155ca87b9afe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a31b08c3648046d9d022776b7face16b79a1c8ff1e7f6bf7c85022bf0fbc448,PodSandboxId:a9e22879ef5c9c065ec0da7fe15c4b6d3cd9ea6e6b3011098f9762444239cebf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc5675
91790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737376435466493009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-rdj6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7882da6-0b57-402a-a902-6c4e6a8c6cd1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ad8bc1026f1bdb7177032b82e72d0c546217740b8b1dc87d93c1b94d3a6e95b,PodSandboxId:4df0b5508f4f2f08f9d16ea11aeb11a8316012fbc05318c803f657791b3ff713,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Ima
ge:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737376434804870471,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dpn56,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbb78c21-4dfb-4a4f-9ca0-ff006da5d4b4,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2daf0a184f3621895e927061be963e75a238a6baa0776ff99c5d196220d1b2c,PodSandboxId:f42ff19a45fd2b6cfda40f6aced1865ffcba4d2d57f74e44e6f88c78abbd956a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4e
b408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737376423827725075,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-496524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6edc389504e3f38bed9f1ad992199b4,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbbf9c4a987bb2466a139975d9e1afa1bf6fcec747b91d0d2afe39556318f516,PodSandboxId:c46b37f716222d06b991bb23f24fff0de8311f793ee64a6f484504938fbe0981,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f
35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737376423855069559,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-496524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab02706afc351bbff75bacaf97e43f14,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c3690bb3e6854d305c4805423961a11049733ec67e0ed2dc51a6421233628a,PodSandboxId:6d65c6fd36215565ae8dab7e0467557da5fcb828092abbdc852bc24660f76c0a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737376423831364502,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-496524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317b5b9620f12a58d628eb3c18c2c4a6,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17e6cb39fe29be013ebee1fe2ada251167dfbeb46182887c0d2577fbeb2f6bc0,PodSandboxId:67a6ee21de304af77a0f8be36e6ed0bf7da423ecd150ea9015750b8e25366b84,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737376423767922456,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-496524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8390da1bf234e29c7ae67d55e30de9b2,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e03570914c73e6ba99d2ef0edd34765e99d61cbab1366bdd62261881c417a99f,PodSandboxId:3c484088b74c1bdf63332e375d54a21db1d1ef42315c6e1c233005267de5a9b0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1737376135144067003,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-496524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6edc389504e3f38bed9f1ad992199b4,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d0c777c6-286e-4efc-8eeb-d158b999811e name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:55:15 no-preload-496524 crio[729]: time="2025-01-20 12:55:15.122545211Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=93415787-5c07-4dda-b566-f46a23c79c54 name=/runtime.v1.RuntimeService/Version
	Jan 20 12:55:15 no-preload-496524 crio[729]: time="2025-01-20 12:55:15.123130150Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=93415787-5c07-4dda-b566-f46a23c79c54 name=/runtime.v1.RuntimeService/Version
	Jan 20 12:55:15 no-preload-496524 crio[729]: time="2025-01-20 12:55:15.125682783Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3e924a7d-c8c3-494b-98ed-5bac3baa6185 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:55:15 no-preload-496524 crio[729]: time="2025-01-20 12:55:15.126356134Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377715126333542,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152099,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3e924a7d-c8c3-494b-98ed-5bac3baa6185 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:55:15 no-preload-496524 crio[729]: time="2025-01-20 12:55:15.127145273Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d56a53a-1621-4702-99d3-e288a5176d92 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:55:15 no-preload-496524 crio[729]: time="2025-01-20 12:55:15.127196164Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d56a53a-1621-4702-99d3-e288a5176d92 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:55:15 no-preload-496524 crio[729]: time="2025-01-20 12:55:15.127490075Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a8bcc3a649de1d02d580cfbe9859e5e81f3cd6662626451fe0fdc363e02b2fc9,PodSandboxId:6da76c72160dc42422af2da1cb465a30787bc0e9aabfa7609300e648ab0dd21e,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737377694967536339,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-4rknb,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 3bfad14e-a251-466a-8a85-81508552fc55,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.c
ontainer.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:563a2e5ea56d92b2a562b84d363ca731b722b22d001798efb53ca127a7d4d047,PodSandboxId:94228031e5c5c20b557fb596a445de80c7a0b39fa1cafb3a0e2e4c03ba9c304c,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737376444737642480,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-9lvm6,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kuberne
tes.pod.uid: 90ae7aa3-bf43-4fce-bb58-b6f7e0994b20,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40748bebc505c644e345ffd8b0f1e2f8904567950d229f9213a10e79e6ec7ac9,PodSandboxId:8d4a058a90983d24e0bf61bbe405248a63911c5652ecf38e05502599a383fc96,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737376436400797440,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: stor
age-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14187f8e-01fd-45ac-a749-82ba272b727f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d97628d36afe279ca127a9afb09193e5c144368b5205bd519eaa9f9aa98137f,PodSandboxId:bd341ed789825828e5143f198468d62ed0214c03cd3a3e0043da4f2545e4ca44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737376435605298728,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-8pf2c,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 9402090c-afdc-4fd7-a673-155ca87b9afe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a31b08c3648046d9d022776b7face16b79a1c8ff1e7f6bf7c85022bf0fbc448,PodSandboxId:a9e22879ef5c9c065ec0da7fe15c4b6d3cd9ea6e6b3011098f9762444239cebf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc5675
91790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737376435466493009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-rdj6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7882da6-0b57-402a-a902-6c4e6a8c6cd1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ad8bc1026f1bdb7177032b82e72d0c546217740b8b1dc87d93c1b94d3a6e95b,PodSandboxId:4df0b5508f4f2f08f9d16ea11aeb11a8316012fbc05318c803f657791b3ff713,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Ima
ge:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737376434804870471,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dpn56,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbb78c21-4dfb-4a4f-9ca0-ff006da5d4b4,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2daf0a184f3621895e927061be963e75a238a6baa0776ff99c5d196220d1b2c,PodSandboxId:f42ff19a45fd2b6cfda40f6aced1865ffcba4d2d57f74e44e6f88c78abbd956a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4e
b408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737376423827725075,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-496524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6edc389504e3f38bed9f1ad992199b4,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbbf9c4a987bb2466a139975d9e1afa1bf6fcec747b91d0d2afe39556318f516,PodSandboxId:c46b37f716222d06b991bb23f24fff0de8311f793ee64a6f484504938fbe0981,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f
35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737376423855069559,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-496524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab02706afc351bbff75bacaf97e43f14,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c3690bb3e6854d305c4805423961a11049733ec67e0ed2dc51a6421233628a,PodSandboxId:6d65c6fd36215565ae8dab7e0467557da5fcb828092abbdc852bc24660f76c0a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737376423831364502,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-496524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317b5b9620f12a58d628eb3c18c2c4a6,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17e6cb39fe29be013ebee1fe2ada251167dfbeb46182887c0d2577fbeb2f6bc0,PodSandboxId:67a6ee21de304af77a0f8be36e6ed0bf7da423ecd150ea9015750b8e25366b84,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737376423767922456,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-496524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8390da1bf234e29c7ae67d55e30de9b2,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e03570914c73e6ba99d2ef0edd34765e99d61cbab1366bdd62261881c417a99f,PodSandboxId:3c484088b74c1bdf63332e375d54a21db1d1ef42315c6e1c233005267de5a9b0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1737376135144067003,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-496524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6edc389504e3f38bed9f1ad992199b4,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7d56a53a-1621-4702-99d3-e288a5176d92 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:55:15 no-preload-496524 crio[729]: time="2025-01-20 12:55:15.170838022Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=16190439-40b2-47c4-9357-1707ee5b006a name=/runtime.v1.RuntimeService/Version
	Jan 20 12:55:15 no-preload-496524 crio[729]: time="2025-01-20 12:55:15.170910049Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=16190439-40b2-47c4-9357-1707ee5b006a name=/runtime.v1.RuntimeService/Version
	Jan 20 12:55:15 no-preload-496524 crio[729]: time="2025-01-20 12:55:15.171946975Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ed834a97-971e-448e-8e78-94553f0f176e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:55:15 no-preload-496524 crio[729]: time="2025-01-20 12:55:15.172328585Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377715172305487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152099,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ed834a97-971e-448e-8e78-94553f0f176e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:55:15 no-preload-496524 crio[729]: time="2025-01-20 12:55:15.172811051Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0d5b9a69-ec8d-4303-a975-4275c389514d name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:55:15 no-preload-496524 crio[729]: time="2025-01-20 12:55:15.172867799Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0d5b9a69-ec8d-4303-a975-4275c389514d name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:55:15 no-preload-496524 crio[729]: time="2025-01-20 12:55:15.173161156Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a8bcc3a649de1d02d580cfbe9859e5e81f3cd6662626451fe0fdc363e02b2fc9,PodSandboxId:6da76c72160dc42422af2da1cb465a30787bc0e9aabfa7609300e648ab0dd21e,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737377694967536339,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-4rknb,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 3bfad14e-a251-466a-8a85-81508552fc55,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.c
ontainer.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:563a2e5ea56d92b2a562b84d363ca731b722b22d001798efb53ca127a7d4d047,PodSandboxId:94228031e5c5c20b557fb596a445de80c7a0b39fa1cafb3a0e2e4c03ba9c304c,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737376444737642480,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-9lvm6,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kuberne
tes.pod.uid: 90ae7aa3-bf43-4fce-bb58-b6f7e0994b20,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40748bebc505c644e345ffd8b0f1e2f8904567950d229f9213a10e79e6ec7ac9,PodSandboxId:8d4a058a90983d24e0bf61bbe405248a63911c5652ecf38e05502599a383fc96,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737376436400797440,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: stor
age-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14187f8e-01fd-45ac-a749-82ba272b727f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d97628d36afe279ca127a9afb09193e5c144368b5205bd519eaa9f9aa98137f,PodSandboxId:bd341ed789825828e5143f198468d62ed0214c03cd3a3e0043da4f2545e4ca44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737376435605298728,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-8pf2c,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 9402090c-afdc-4fd7-a673-155ca87b9afe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a31b08c3648046d9d022776b7face16b79a1c8ff1e7f6bf7c85022bf0fbc448,PodSandboxId:a9e22879ef5c9c065ec0da7fe15c4b6d3cd9ea6e6b3011098f9762444239cebf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc5675
91790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737376435466493009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-rdj6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7882da6-0b57-402a-a902-6c4e6a8c6cd1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ad8bc1026f1bdb7177032b82e72d0c546217740b8b1dc87d93c1b94d3a6e95b,PodSandboxId:4df0b5508f4f2f08f9d16ea11aeb11a8316012fbc05318c803f657791b3ff713,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Ima
ge:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737376434804870471,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dpn56,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbb78c21-4dfb-4a4f-9ca0-ff006da5d4b4,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2daf0a184f3621895e927061be963e75a238a6baa0776ff99c5d196220d1b2c,PodSandboxId:f42ff19a45fd2b6cfda40f6aced1865ffcba4d2d57f74e44e6f88c78abbd956a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4e
b408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737376423827725075,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-496524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6edc389504e3f38bed9f1ad992199b4,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbbf9c4a987bb2466a139975d9e1afa1bf6fcec747b91d0d2afe39556318f516,PodSandboxId:c46b37f716222d06b991bb23f24fff0de8311f793ee64a6f484504938fbe0981,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f
35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737376423855069559,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-496524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab02706afc351bbff75bacaf97e43f14,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c3690bb3e6854d305c4805423961a11049733ec67e0ed2dc51a6421233628a,PodSandboxId:6d65c6fd36215565ae8dab7e0467557da5fcb828092abbdc852bc24660f76c0a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737376423831364502,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-496524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317b5b9620f12a58d628eb3c18c2c4a6,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17e6cb39fe29be013ebee1fe2ada251167dfbeb46182887c0d2577fbeb2f6bc0,PodSandboxId:67a6ee21de304af77a0f8be36e6ed0bf7da423ecd150ea9015750b8e25366b84,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737376423767922456,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-496524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8390da1bf234e29c7ae67d55e30de9b2,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e03570914c73e6ba99d2ef0edd34765e99d61cbab1366bdd62261881c417a99f,PodSandboxId:3c484088b74c1bdf63332e375d54a21db1d1ef42315c6e1c233005267de5a9b0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1737376135144067003,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-496524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6edc389504e3f38bed9f1ad992199b4,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0d5b9a69-ec8d-4303-a975-4275c389514d name=/runtime.v1.RuntimeService/ListContainers
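
The Request/Response pairs above are CRI-O's gRPC debug logging of routine CRI calls (Version, ImageFsInfo, ListContainers); they are level=debug polling entries, not errors. As a rough sketch (assuming the no-preload-496524 profile is still running and reachable over minikube ssh), the same crio journal can be pulled straight from the node:

  # tail the CRI-O service journal on the minikube node (timestamps match the capture above)
  minikube ssh -p no-preload-496524 -- sudo journalctl -u crio --since "2025-01-20 12:55" --no-pager | tail -n 50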
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	a8bcc3a649de1       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago      Exited              dashboard-metrics-scraper   9                   6da76c72160dc       dashboard-metrics-scraper-86c6bf9756-4rknb
	563a2e5ea56d9       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   21 minutes ago      Running             kubernetes-dashboard        0                   94228031e5c5c       kubernetes-dashboard-7779f9b69b-9lvm6
	40748bebc505c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 minutes ago      Running             storage-provisioner         0                   8d4a058a90983       storage-provisioner
	7d97628d36afe       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           21 minutes ago      Running             coredns                     0                   bd341ed789825       coredns-668d6bf9bc-8pf2c
	2a31b08c36480       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           21 minutes ago      Running             coredns                     0                   a9e22879ef5c9       coredns-668d6bf9bc-rdj6t
	0ad8bc1026f1b       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08                                           21 minutes ago      Running             kube-proxy                  0                   4df0b5508f4f2       kube-proxy-dpn56
	bbbf9c4a987bb       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                           21 minutes ago      Running             etcd                        2                   c46b37f716222       etcd-no-preload-496524
	38c3690bb3e68       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5                                           21 minutes ago      Running             kube-scheduler              2                   6d65c6fd36215       kube-scheduler-no-preload-496524
	c2daf0a184f36       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4                                           21 minutes ago      Running             kube-apiserver              2                   f42ff19a45fd2       kube-apiserver-no-preload-496524
	17e6cb39fe29b       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3                                           21 minutes ago      Running             kube-controller-manager     2                   67a6ee21de304       kube-controller-manager-no-preload-496524
	e03570914c73e       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4                                           26 minutes ago      Exited              kube-apiserver              1                   3c484088b74c1       kube-apiserver-no-preload-496524
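
The listing above is the runtime-side container view: the control-plane containers are on attempt 2 and have been running for 21 minutes, and the only recently exited entry is dashboard-metrics-scraper at attempt 9 (20 seconds old). A minimal sketch for reproducing and drilling into this by hand, assuming crictl is available on the node (it normally is with the CRI-O based minikube image):

  # same view as the table above, taken directly from the container runtime
  minikube ssh -p no-preload-496524 -- sudo crictl ps -a
  # logs of the repeatedly exiting dashboard-metrics-scraper container (ID taken from the listing above)
  minikube ssh -p no-preload-496524 -- sudo crictl logs a8bcc3a649de1d02d580cfbe9859e5e81f3cd6662626451fe0fdc363e02b2fc9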
	
	
	==> coredns [2a31b08c3648046d9d022776b7face16b79a1c8ff1e7f6bf7c85022bf0fbc448] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [7d97628d36afe279ca127a9afb09193e5c144368b5205bd519eaa9f9aa98137f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-496524
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-496524
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77d80cf1517f5f1439721b28711982314b21bec9
	                    minikube.k8s.io/name=no-preload-496524
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_20T12_33_49_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Jan 2025 12:33:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-496524
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Jan 2025 12:55:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Jan 2025 12:53:44 +0000   Mon, 20 Jan 2025 12:33:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Jan 2025 12:53:44 +0000   Mon, 20 Jan 2025 12:33:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Jan 2025 12:53:44 +0000   Mon, 20 Jan 2025 12:33:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Jan 2025 12:53:44 +0000   Mon, 20 Jan 2025 12:33:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.107
	  Hostname:    no-preload-496524
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f66713b78ea941b1ace82f4f00f43c91
	  System UUID:                f66713b7-8ea9-41b1-ace8-2f4f00f43c91
	  Boot ID:                    0b88fbfa-1090-4661-9b37-3fdc90db2bf8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-8pf2c                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 coredns-668d6bf9bc-rdj6t                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-no-preload-496524                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-no-preload-496524              250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-no-preload-496524     200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-dpn56                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-no-preload-496524              100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-f79f97bbb-dbx78                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-4rknb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-9lvm6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 21m   kube-proxy       
	  Normal  Starting                 21m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m   kubelet          Node no-preload-496524 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m   kubelet          Node no-preload-496524 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m   kubelet          Node no-preload-496524 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m   node-controller  Node no-preload-496524 event: Registered Node no-preload-496524 in Controller
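
The node description reports Ready=True, no taints, and only normal startup events, so the node itself looks healthy. A quick sketch for re-querying it live, assuming the kubeconfig context carries the profile name (minikube's default):

  # current node conditions, versions, and addresses
  kubectl --context no-preload-496524 describe node no-preload-496524
  kubectl --context no-preload-496524 get nodes -o wide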
	
	
	==> dmesg <==
	[  +4.865981] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.063833] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.540166] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.985013] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.053803] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.047678] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.146756] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.138245] systemd-fstab-generator[690]: Ignoring "noauto" option for root device
	[  +0.236855] systemd-fstab-generator[719]: Ignoring "noauto" option for root device
	[ +15.548196] systemd-fstab-generator[1337]: Ignoring "noauto" option for root device
	[  +0.064005] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.958263] systemd-fstab-generator[1459]: Ignoring "noauto" option for root device
	[  +3.240525] kauditd_printk_skb: 97 callbacks suppressed
	[Jan20 12:29] kauditd_printk_skb: 77 callbacks suppressed
	[  +9.257329] kauditd_printk_skb: 12 callbacks suppressed
	[Jan20 12:33] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.409257] systemd-fstab-generator[3285]: Ignoring "noauto" option for root device
	[  +4.597568] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.452396] systemd-fstab-generator[3623]: Ignoring "noauto" option for root device
	[  +5.389059] systemd-fstab-generator[3738]: Ignoring "noauto" option for root device
	[  +0.087781] kauditd_printk_skb: 14 callbacks suppressed
	[Jan20 12:34] kauditd_printk_skb: 110 callbacks suppressed
	[  +5.500899] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [bbbf9c4a987bb2466a139975d9e1afa1bf6fcec747b91d0d2afe39556318f516] <==
	{"level":"info","ts":"2025-01-20T12:33:45.210789Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-20T12:33:45.210776Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-20T12:33:45.211421Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-20T12:33:45.212059Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.107:2379"}
	{"level":"info","ts":"2025-01-20T12:33:45.212679Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"740117290cb61fd6","local-member-id":"7a1421f129b0f3c4","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-20T12:33:45.212913Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-20T12:33:45.213006Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-20T12:34:02.738057Z","caller":"traceutil/trace.go:171","msg":"trace[1674318736] transaction","detail":"{read_only:false; response_revision:487; number_of_response:1; }","duration":"118.340622ms","start":"2025-01-20T12:34:02.619697Z","end":"2025-01-20T12:34:02.738038Z","steps":["trace[1674318736] 'process raft request'  (duration: 118.217272ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T12:34:03.751428Z","caller":"traceutil/trace.go:171","msg":"trace[1618808296] transaction","detail":"{read_only:false; response_revision:488; number_of_response:1; }","duration":"121.994744ms","start":"2025-01-20T12:34:03.629353Z","end":"2025-01-20T12:34:03.751348Z","steps":["trace[1618808296] 'process raft request'  (duration: 121.575346ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T12:34:06.884330Z","caller":"traceutil/trace.go:171","msg":"trace[757178536] transaction","detail":"{read_only:false; response_revision:501; number_of_response:1; }","duration":"114.964618ms","start":"2025-01-20T12:34:06.769344Z","end":"2025-01-20T12:34:06.884309Z","steps":["trace[757178536] 'process raft request'  (duration: 114.59642ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T12:34:08.068115Z","caller":"traceutil/trace.go:171","msg":"trace[1930548478] linearizableReadLoop","detail":"{readStateIndex:519; appliedIndex:518; }","duration":"113.379919ms","start":"2025-01-20T12:34:07.954721Z","end":"2025-01-20T12:34:08.068101Z","steps":["trace[1930548478] 'read index received'  (duration: 113.239505ms)","trace[1930548478] 'applied index is now lower than readState.Index'  (duration: 139.831µs)"],"step_count":2}
	{"level":"info","ts":"2025-01-20T12:34:08.068203Z","caller":"traceutil/trace.go:171","msg":"trace[709414978] transaction","detail":"{read_only:false; response_revision:503; number_of_response:1; }","duration":"117.910071ms","start":"2025-01-20T12:34:07.950286Z","end":"2025-01-20T12:34:08.068196Z","steps":["trace[709414978] 'process raft request'  (duration: 117.698655ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T12:34:08.070311Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.637435ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-f79f97bbb-dbx78.181c66a9f244f248\" limit:1 ","response":"range_response_count:1 size:814"}
	{"level":"info","ts":"2025-01-20T12:34:08.070364Z","caller":"traceutil/trace.go:171","msg":"trace[640882056] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-f79f97bbb-dbx78.181c66a9f244f248; range_end:; response_count:1; response_revision:503; }","duration":"115.65134ms","start":"2025-01-20T12:34:07.954696Z","end":"2025-01-20T12:34:08.070348Z","steps":["trace[640882056] 'agreement among raft nodes before linearized reading'  (duration: 113.583882ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T12:34:08.327589Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.919253ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T12:34:08.327650Z","caller":"traceutil/trace.go:171","msg":"trace[1406233765] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:505; }","duration":"128.017522ms","start":"2025-01-20T12:34:08.199618Z","end":"2025-01-20T12:34:08.327635Z","steps":["trace[1406233765] 'range keys from in-memory index tree'  (duration: 127.869114ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T12:43:45.239489Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":834}
	{"level":"info","ts":"2025-01-20T12:43:45.265244Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":834,"took":"24.465135ms","hash":1443359744,"current-db-size-bytes":2883584,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":2883584,"current-db-size-in-use":"2.9 MB"}
	{"level":"info","ts":"2025-01-20T12:43:45.265661Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1443359744,"revision":834,"compact-revision":-1}
	{"level":"info","ts":"2025-01-20T12:48:45.247663Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1085}
	{"level":"info","ts":"2025-01-20T12:48:45.252204Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1085,"took":"3.742348ms","hash":288529669,"current-db-size-bytes":2883584,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1740800,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-01-20T12:48:45.252315Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":288529669,"revision":1085,"compact-revision":834}
	{"level":"info","ts":"2025-01-20T12:53:45.255449Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1338}
	{"level":"info","ts":"2025-01-20T12:53:45.259682Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1338,"took":"3.809383ms","hash":1446620419,"current-db-size-bytes":2883584,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1757184,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-20T12:53:45.259730Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1446620419,"revision":1338,"compact-revision":1085}
	
	
	==> kernel <==
	 12:55:15 up 26 min,  0 users,  load average: 0.45, 0.21, 0.22
	Linux no-preload-496524 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c2daf0a184f3621895e927061be963e75a238a6baa0776ff99c5d196220d1b2c] <==
	I0120 12:51:47.443105       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0120 12:51:47.443177       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0120 12:53:46.441316       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 12:53:46.441608       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0120 12:53:47.443906       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 12:53:47.444219       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0120 12:53:47.444436       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 12:53:47.444571       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0120 12:53:47.445459       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0120 12:53:47.446666       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0120 12:54:47.446644       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 12:54:47.446780       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0120 12:54:47.446869       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 12:54:47.446936       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0120 12:54:47.448071       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0120 12:54:47.448162       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
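
The running kube-apiserver repeatedly fails to refresh the aggregated v1beta1.metrics.k8s.io API (503 service unavailable), which typically means the pod behind it (metrics-server-f79f97bbb-dbx78 in the node listing above) is not ready or not reachable. A minimal follow-up sketch, again assuming the context name matches the profile:

  # is the aggregated API registered and Available?
  kubectl --context no-preload-496524 get apiservice v1beta1.metrics.k8s.io
  # status and logs of the backing pod named in the node description
  kubectl --context no-preload-496524 -n kube-system describe pod metrics-server-f79f97bbb-dbx78
  kubectl --context no-preload-496524 -n kube-system logs metrics-server-f79f97bbb-dbx78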
	
	
	==> kube-apiserver [e03570914c73e6ba99d2ef0edd34765e99d61cbab1366bdd62261881c417a99f] <==
	W0120 12:33:35.481412       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:33:35.508563       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:33:35.657940       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:33:35.658122       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:33:35.712457       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:33:35.941170       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:33:39.059415       1 logging.go:55] [core] [Channel #199 SubChannel #200]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:33:39.417160       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:33:39.526457       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:33:39.575275       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:33:39.817971       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:33:39.892063       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:33:40.060469       1 logging.go:55] [core] [Channel #199 SubChannel #200]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:33:40.134053       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:33:40.416143       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:33:40.514270       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:33:40.534838       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:33:40.611642       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:33:40.655927       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:33:40.760245       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:33:40.767791       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:33:40.780211       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:33:40.810619       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:33:40.851449       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:33:40.854972       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [17e6cb39fe29be013ebee1fe2ada251167dfbeb46182887c0d2577fbeb2f6bc0] <==
	E0120 12:50:23.205868       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:50:23.313420       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 12:50:53.213907       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:50:53.321581       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 12:51:23.220883       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:51:23.329934       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 12:51:53.228196       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:51:53.337586       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 12:52:23.236161       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:52:23.355827       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 12:52:53.244774       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:52:53.363047       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 12:53:23.251230       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:53:23.370798       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0120 12:53:44.282752       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-496524"
	E0120 12:53:53.257990       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:53:53.378051       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 12:54:23.264270       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:54:23.393980       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 12:54:53.271488       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:54:53.402245       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0120 12:54:55.910962       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="1.664515ms"
	I0120 12:54:57.959920       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="714.299µs"
	I0120 12:55:04.576153       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="67.896µs"
	I0120 12:55:12.958565       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="71.979µs"
	
	
	==> kube-proxy [0ad8bc1026f1bdb7177032b82e72d0c546217740b8b1dc87d93c1b94d3a6e95b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0120 12:33:55.203556       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0120 12:33:55.219408       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.61.107"]
	E0120 12:33:55.219474       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0120 12:33:55.302477       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0120 12:33:55.302540       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0120 12:33:55.302570       1 server_linux.go:170] "Using iptables Proxier"
	I0120 12:33:55.305050       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0120 12:33:55.305350       1 server.go:497] "Version info" version="v1.32.0"
	I0120 12:33:55.305672       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0120 12:33:55.308673       1 config.go:199] "Starting service config controller"
	I0120 12:33:55.308733       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0120 12:33:55.308768       1 config.go:105] "Starting endpoint slice config controller"
	I0120 12:33:55.308773       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0120 12:33:55.311710       1 config.go:329] "Starting node config controller"
	I0120 12:33:55.311734       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0120 12:33:55.412510       1 shared_informer.go:320] Caches are synced for node config
	I0120 12:33:55.412584       1 shared_informer.go:320] Caches are synced for service config
	I0120 12:33:55.412614       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [38c3690bb3e6854d305c4805423961a11049733ec67e0ed2dc51a6421233628a] <==
	W0120 12:33:46.451935       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0120 12:33:46.452027       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 12:33:46.452151       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0120 12:33:46.452229       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0120 12:33:46.452358       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0120 12:33:46.452465       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 12:33:46.452563       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0120 12:33:46.453162       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0120 12:33:46.453200       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0120 12:33:46.452598       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 12:33:47.287633       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0120 12:33:47.287693       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0120 12:33:47.546179       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0120 12:33:47.546298       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 12:33:47.576639       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0120 12:33:47.576750       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 12:33:47.607332       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0120 12:33:47.607458       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0120 12:33:47.734988       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0120 12:33:47.735035       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0120 12:33:47.742988       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0120 12:33:47.743043       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 12:33:47.755060       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0120 12:33:47.755104       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0120 12:33:50.541792       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 20 12:54:41 no-preload-496524 kubelet[3630]: I0120 12:54:41.940430    3630 scope.go:117] "RemoveContainer" containerID="99961c7cdad389d55369ce9ee06354c7d7d747b77fc5be98d08e8b350f6d6a16"
	Jan 20 12:54:41 no-preload-496524 kubelet[3630]: E0120 12:54:41.940878    3630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-4rknb_kubernetes-dashboard(3bfad14e-a251-466a-8a85-81508552fc55)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-4rknb" podUID="3bfad14e-a251-466a-8a85-81508552fc55"
	Jan 20 12:54:45 no-preload-496524 kubelet[3630]: E0120 12:54:45.956528    3630 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 20 12:54:45 no-preload-496524 kubelet[3630]: E0120 12:54:45.956669    3630 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 20 12:54:45 no-preload-496524 kubelet[3630]: E0120 12:54:45.957296    3630 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4f6tm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:
nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdi
n:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-f79f97bbb-dbx78_kube-system(c8fb707c-75c2-42b6-802e-52a09222f9ea): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Jan 20 12:54:45 no-preload-496524 kubelet[3630]: E0120 12:54:45.958682    3630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-dbx78" podUID="c8fb707c-75c2-42b6-802e-52a09222f9ea"
	Jan 20 12:54:48 no-preload-496524 kubelet[3630]: E0120 12:54:48.955907    3630 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 20 12:54:48 no-preload-496524 kubelet[3630]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 20 12:54:48 no-preload-496524 kubelet[3630]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 20 12:54:48 no-preload-496524 kubelet[3630]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 20 12:54:48 no-preload-496524 kubelet[3630]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 20 12:54:49 no-preload-496524 kubelet[3630]: E0120 12:54:49.308467    3630 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377689308129964,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152099,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 12:54:49 no-preload-496524 kubelet[3630]: E0120 12:54:49.308547    3630 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377689308129964,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152099,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 12:54:54 no-preload-496524 kubelet[3630]: I0120 12:54:54.940992    3630 scope.go:117] "RemoveContainer" containerID="99961c7cdad389d55369ce9ee06354c7d7d747b77fc5be98d08e8b350f6d6a16"
	Jan 20 12:54:55 no-preload-496524 kubelet[3630]: I0120 12:54:55.892080    3630 scope.go:117] "RemoveContainer" containerID="99961c7cdad389d55369ce9ee06354c7d7d747b77fc5be98d08e8b350f6d6a16"
	Jan 20 12:54:55 no-preload-496524 kubelet[3630]: I0120 12:54:55.895148    3630 scope.go:117] "RemoveContainer" containerID="a8bcc3a649de1d02d580cfbe9859e5e81f3cd6662626451fe0fdc363e02b2fc9"
	Jan 20 12:54:55 no-preload-496524 kubelet[3630]: E0120 12:54:55.895566    3630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-4rknb_kubernetes-dashboard(3bfad14e-a251-466a-8a85-81508552fc55)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-4rknb" podUID="3bfad14e-a251-466a-8a85-81508552fc55"
	Jan 20 12:54:57 no-preload-496524 kubelet[3630]: E0120 12:54:57.942597    3630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-dbx78" podUID="c8fb707c-75c2-42b6-802e-52a09222f9ea"
	Jan 20 12:54:59 no-preload-496524 kubelet[3630]: E0120 12:54:59.310723    3630 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377699309970728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152099,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 12:54:59 no-preload-496524 kubelet[3630]: E0120 12:54:59.311442    3630 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377699309970728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152099,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 12:55:04 no-preload-496524 kubelet[3630]: I0120 12:55:04.558828    3630 scope.go:117] "RemoveContainer" containerID="a8bcc3a649de1d02d580cfbe9859e5e81f3cd6662626451fe0fdc363e02b2fc9"
	Jan 20 12:55:04 no-preload-496524 kubelet[3630]: E0120 12:55:04.559007    3630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-4rknb_kubernetes-dashboard(3bfad14e-a251-466a-8a85-81508552fc55)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-4rknb" podUID="3bfad14e-a251-466a-8a85-81508552fc55"
	Jan 20 12:55:09 no-preload-496524 kubelet[3630]: E0120 12:55:09.312976    3630 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377709312624453,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152099,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 12:55:09 no-preload-496524 kubelet[3630]: E0120 12:55:09.313018    3630 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377709312624453,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152099,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 12:55:12 no-preload-496524 kubelet[3630]: E0120 12:55:12.942338    3630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-dbx78" podUID="c8fb707c-75c2-42b6-802e-52a09222f9ea"
	
	
	==> kubernetes-dashboard [563a2e5ea56d92b2a562b84d363ca731b722b22d001798efb53ca127a7d4d047] <==
	2025/01/20 12:43:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:43:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:44:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:44:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:45:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:45:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:46:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:46:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:47:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:47:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:48:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:48:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:49:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:49:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:50:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:50:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:51:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:51:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:52:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:52:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:53:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:53:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:54:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:54:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:55:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [40748bebc505c644e345ffd8b0f1e2f8904567950d229f9213a10e79e6ec7ac9] <==
	I0120 12:33:56.549164       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0120 12:33:56.560032       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0120 12:33:56.560140       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0120 12:33:56.573878       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0120 12:33:56.574191       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-496524_777f67a2-a2bd-4baa-aae5-84ddb318caa8!
	I0120 12:33:56.574585       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6d47ee1d-5b90-4adb-9148-4bd647828ff9", APIVersion:"v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-496524_777f67a2-a2bd-4baa-aae5-84ddb318caa8 became leader
	I0120 12:33:56.675520       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-496524_777f67a2-a2bd-4baa-aae5-84ddb318caa8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-496524 -n no-preload-496524
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-496524 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-dbx78
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-496524 describe pod metrics-server-f79f97bbb-dbx78
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-496524 describe pod metrics-server-f79f97bbb-dbx78: exit status 1 (65.219726ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-dbx78" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-496524 describe pod metrics-server-f79f97bbb-dbx78: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (1620.72s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (1645.71s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-987349 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p embed-certs-987349 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: signal: killed (27m22.894120116s)

                                                
                                                
-- stdout --
	* [embed-certs-987349] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20151
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20151-942401/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-942401/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "embed-certs-987349" primary control-plane node in "embed-certs-987349" cluster
	* Restarting existing kvm2 VM for "embed-certs-987349" ...
	* Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-987349 addons enable metrics-server
	
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 12:29:17.321924  992635 out.go:345] Setting OutFile to fd 1 ...
	I0120 12:29:17.322014  992635 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:29:17.322022  992635 out.go:358] Setting ErrFile to fd 2...
	I0120 12:29:17.322026  992635 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:29:17.322220  992635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-942401/.minikube/bin
	I0120 12:29:17.322778  992635 out.go:352] Setting JSON to false
	I0120 12:29:17.323762  992635 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":18700,"bootTime":1737357457,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 12:29:17.323869  992635 start.go:139] virtualization: kvm guest
	I0120 12:29:17.326037  992635 out.go:177] * [embed-certs-987349] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 12:29:17.327687  992635 notify.go:220] Checking for updates...
	I0120 12:29:17.327725  992635 out.go:177]   - MINIKUBE_LOCATION=20151
	I0120 12:29:17.329056  992635 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 12:29:17.330322  992635 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:29:17.331511  992635 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-942401/.minikube
	I0120 12:29:17.332692  992635 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 12:29:17.333785  992635 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 12:29:17.335328  992635 config.go:182] Loaded profile config "embed-certs-987349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:29:17.335708  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:29:17.335750  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:29:17.350420  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38405
	I0120 12:29:17.350859  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:29:17.351465  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:29:17.351486  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:29:17.351872  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:29:17.352086  992635 main.go:141] libmachine: (embed-certs-987349) Calling .DriverName
	I0120 12:29:17.352418  992635 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 12:29:17.352698  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:29:17.352768  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:29:17.367722  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36747
	I0120 12:29:17.368111  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:29:17.368652  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:29:17.368675  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:29:17.368981  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:29:17.369179  992635 main.go:141] libmachine: (embed-certs-987349) Calling .DriverName
	I0120 12:29:17.405110  992635 out.go:177] * Using the kvm2 driver based on existing profile
	I0120 12:29:17.406345  992635 start.go:297] selected driver: kvm2
	I0120 12:29:17.406360  992635 start.go:901] validating driver "kvm2" against &{Name:embed-certs-987349 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-987349 Na
mespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.170 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpira
tion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:29:17.406502  992635 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 12:29:17.407420  992635 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:29:17.407525  992635 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20151-942401/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 12:29:17.422870  992635 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 12:29:17.423297  992635 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 12:29:17.423332  992635 cni.go:84] Creating CNI manager for ""
	I0120 12:29:17.423397  992635 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:29:17.423436  992635 start.go:340] cluster config:
	{Name:embed-certs-987349 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-987349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] API
ServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.170 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVe
rsion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:29:17.423586  992635 iso.go:125] acquiring lock: {Name:mk7f0ac9e7ba04626414e742c3e5292d79996a4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:29:17.425164  992635 out.go:177] * Starting "embed-certs-987349" primary control-plane node in "embed-certs-987349" cluster
	I0120 12:29:17.426255  992635 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 12:29:17.426291  992635 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0120 12:29:17.426301  992635 cache.go:56] Caching tarball of preloaded images
	I0120 12:29:17.426399  992635 preload.go:172] Found /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 12:29:17.426414  992635 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0120 12:29:17.426544  992635 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/embed-certs-987349/config.json ...
	I0120 12:29:17.426753  992635 start.go:360] acquireMachinesLock for embed-certs-987349: {Name:mkd5527ce9753efd08511b23d71dbb6bbf416f1b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 12:29:17.426802  992635 start.go:364] duration metric: took 28.964µs to acquireMachinesLock for "embed-certs-987349"
	I0120 12:29:17.426823  992635 start.go:96] Skipping create...Using existing machine configuration
	I0120 12:29:17.426833  992635 fix.go:54] fixHost starting: 
	I0120 12:29:17.427144  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:29:17.427182  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:29:17.441629  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33985
	I0120 12:29:17.442036  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:29:17.442497  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:29:17.442540  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:29:17.442994  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:29:17.443203  992635 main.go:141] libmachine: (embed-certs-987349) Calling .DriverName
	I0120 12:29:17.443366  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetState
	I0120 12:29:17.445839  992635 fix.go:112] recreateIfNeeded on embed-certs-987349: state=Stopped err=<nil>
	I0120 12:29:17.445914  992635 main.go:141] libmachine: (embed-certs-987349) Calling .DriverName
	W0120 12:29:17.446345  992635 fix.go:138] unexpected machine state, will restart: <nil>
	I0120 12:29:17.448011  992635 out.go:177] * Restarting existing kvm2 VM for "embed-certs-987349" ...
	I0120 12:29:17.449326  992635 main.go:141] libmachine: (embed-certs-987349) Calling .Start
	I0120 12:29:17.449543  992635 main.go:141] libmachine: (embed-certs-987349) starting domain...
	I0120 12:29:17.449558  992635 main.go:141] libmachine: (embed-certs-987349) ensuring networks are active...
	I0120 12:29:17.450317  992635 main.go:141] libmachine: (embed-certs-987349) Ensuring network default is active
	I0120 12:29:17.450650  992635 main.go:141] libmachine: (embed-certs-987349) Ensuring network mk-embed-certs-987349 is active
	I0120 12:29:17.450976  992635 main.go:141] libmachine: (embed-certs-987349) getting domain XML...
	I0120 12:29:17.451655  992635 main.go:141] libmachine: (embed-certs-987349) creating domain...
	I0120 12:29:18.654791  992635 main.go:141] libmachine: (embed-certs-987349) waiting for IP...
	I0120 12:29:18.655818  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:18.656384  992635 main.go:141] libmachine: (embed-certs-987349) DBG | unable to find current IP address of domain embed-certs-987349 in network mk-embed-certs-987349
	I0120 12:29:18.656516  992635 main.go:141] libmachine: (embed-certs-987349) DBG | I0120 12:29:18.656378  992671 retry.go:31] will retry after 233.538687ms: waiting for domain to come up
	I0120 12:29:18.891875  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:18.892373  992635 main.go:141] libmachine: (embed-certs-987349) DBG | unable to find current IP address of domain embed-certs-987349 in network mk-embed-certs-987349
	I0120 12:29:18.892409  992635 main.go:141] libmachine: (embed-certs-987349) DBG | I0120 12:29:18.892316  992671 retry.go:31] will retry after 294.648004ms: waiting for domain to come up
	I0120 12:29:19.188996  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:19.189532  992635 main.go:141] libmachine: (embed-certs-987349) DBG | unable to find current IP address of domain embed-certs-987349 in network mk-embed-certs-987349
	I0120 12:29:19.189562  992635 main.go:141] libmachine: (embed-certs-987349) DBG | I0120 12:29:19.189498  992671 retry.go:31] will retry after 402.425129ms: waiting for domain to come up
	I0120 12:29:19.593074  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:19.593475  992635 main.go:141] libmachine: (embed-certs-987349) DBG | unable to find current IP address of domain embed-certs-987349 in network mk-embed-certs-987349
	I0120 12:29:19.593530  992635 main.go:141] libmachine: (embed-certs-987349) DBG | I0120 12:29:19.593436  992671 retry.go:31] will retry after 565.11575ms: waiting for domain to come up
	I0120 12:29:20.160301  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:20.160750  992635 main.go:141] libmachine: (embed-certs-987349) DBG | unable to find current IP address of domain embed-certs-987349 in network mk-embed-certs-987349
	I0120 12:29:20.160775  992635 main.go:141] libmachine: (embed-certs-987349) DBG | I0120 12:29:20.160697  992671 retry.go:31] will retry after 510.578277ms: waiting for domain to come up
	I0120 12:29:20.673217  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:20.673716  992635 main.go:141] libmachine: (embed-certs-987349) DBG | unable to find current IP address of domain embed-certs-987349 in network mk-embed-certs-987349
	I0120 12:29:20.673762  992635 main.go:141] libmachine: (embed-certs-987349) DBG | I0120 12:29:20.673721  992671 retry.go:31] will retry after 914.115534ms: waiting for domain to come up
	I0120 12:29:21.589874  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:21.590432  992635 main.go:141] libmachine: (embed-certs-987349) DBG | unable to find current IP address of domain embed-certs-987349 in network mk-embed-certs-987349
	I0120 12:29:21.590452  992635 main.go:141] libmachine: (embed-certs-987349) DBG | I0120 12:29:21.590393  992671 retry.go:31] will retry after 766.720015ms: waiting for domain to come up
	I0120 12:29:22.358298  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:22.358881  992635 main.go:141] libmachine: (embed-certs-987349) DBG | unable to find current IP address of domain embed-certs-987349 in network mk-embed-certs-987349
	I0120 12:29:22.358906  992635 main.go:141] libmachine: (embed-certs-987349) DBG | I0120 12:29:22.358852  992671 retry.go:31] will retry after 1.294554678s: waiting for domain to come up
	I0120 12:29:23.655246  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:23.655792  992635 main.go:141] libmachine: (embed-certs-987349) DBG | unable to find current IP address of domain embed-certs-987349 in network mk-embed-certs-987349
	I0120 12:29:23.655827  992635 main.go:141] libmachine: (embed-certs-987349) DBG | I0120 12:29:23.655753  992671 retry.go:31] will retry after 1.348630972s: waiting for domain to come up
	I0120 12:29:25.006365  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:25.006842  992635 main.go:141] libmachine: (embed-certs-987349) DBG | unable to find current IP address of domain embed-certs-987349 in network mk-embed-certs-987349
	I0120 12:29:25.006878  992635 main.go:141] libmachine: (embed-certs-987349) DBG | I0120 12:29:25.006812  992671 retry.go:31] will retry after 2.284987792s: waiting for domain to come up
	I0120 12:29:27.294035  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:27.294638  992635 main.go:141] libmachine: (embed-certs-987349) DBG | unable to find current IP address of domain embed-certs-987349 in network mk-embed-certs-987349
	I0120 12:29:27.294693  992635 main.go:141] libmachine: (embed-certs-987349) DBG | I0120 12:29:27.294612  992671 retry.go:31] will retry after 2.212618885s: waiting for domain to come up
	I0120 12:29:29.508968  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:29.509601  992635 main.go:141] libmachine: (embed-certs-987349) DBG | unable to find current IP address of domain embed-certs-987349 in network mk-embed-certs-987349
	I0120 12:29:29.509633  992635 main.go:141] libmachine: (embed-certs-987349) DBG | I0120 12:29:29.509553  992671 retry.go:31] will retry after 2.726226572s: waiting for domain to come up
	I0120 12:29:32.236895  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:32.237404  992635 main.go:141] libmachine: (embed-certs-987349) DBG | unable to find current IP address of domain embed-certs-987349 in network mk-embed-certs-987349
	I0120 12:29:32.237427  992635 main.go:141] libmachine: (embed-certs-987349) DBG | I0120 12:29:32.237385  992671 retry.go:31] will retry after 2.750751947s: waiting for domain to come up
	I0120 12:29:34.991512  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:34.991992  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has current primary IP address 192.168.72.170 and MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:34.992024  992635 main.go:141] libmachine: (embed-certs-987349) found domain IP: 192.168.72.170
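The run of "will retry after …" lines above is libmachine polling the libvirt DHCP leases with a growing, randomized delay until the domain reports an address. A minimal Go sketch of that backoff pattern, with a hypothetical lookup callback rather than minikube's actual retry.go API:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookup with a growing, jittered delay until it returns an
	// address or the deadline passes, mirroring the increasing "will retry after"
	// intervals in the log. Hypothetical sketch, not minikube's actual retry.go.
	func waitForIP(lookup func() (string, error), deadline time.Time) (string, error) {
		delay := 500 * time.Millisecond
		for {
			if ip, err := lookup(); err == nil && ip != "" {
				return ip, nil
			}
			if time.Now().After(deadline) {
				return "", errors.New("timed out waiting for domain to come up")
			}
			// sleep for the base delay plus jitter, then grow the base delay
			time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
			delay = time.Duration(float64(delay) * 1.5)
		}
	}

	func main() {
		// stand-in for a DHCP-lease lookup that succeeds on the third attempt
		attempts := 0
		ip, err := waitForIP(func() (string, error) {
			attempts++
			if attempts < 3 {
				return "", errors.New("unable to find current IP address")
			}
			return "192.168.72.170", nil
		}, time.Now().Add(30*time.Second))
		fmt.Println(ip, err)
	}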
	I0120 12:29:34.992036  992635 main.go:141] libmachine: (embed-certs-987349) reserving static IP address...
	I0120 12:29:34.992430  992635 main.go:141] libmachine: (embed-certs-987349) DBG | found host DHCP lease matching {name: "embed-certs-987349", mac: "52:54:00:17:72:25", ip: "192.168.72.170"} in network mk-embed-certs-987349: {Iface:virbr4 ExpiryTime:2025-01-20 13:29:28 +0000 UTC Type:0 Mac:52:54:00:17:72:25 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:embed-certs-987349 Clientid:01:52:54:00:17:72:25}
	I0120 12:29:34.992460  992635 main.go:141] libmachine: (embed-certs-987349) reserved static IP address 192.168.72.170 for domain embed-certs-987349
	I0120 12:29:34.992481  992635 main.go:141] libmachine: (embed-certs-987349) DBG | skip adding static IP to network mk-embed-certs-987349 - found existing host DHCP lease matching {name: "embed-certs-987349", mac: "52:54:00:17:72:25", ip: "192.168.72.170"}
	I0120 12:29:34.992495  992635 main.go:141] libmachine: (embed-certs-987349) DBG | Getting to WaitForSSH function...
	I0120 12:29:34.992504  992635 main.go:141] libmachine: (embed-certs-987349) waiting for SSH...
	I0120 12:29:34.994541  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:34.994936  992635 main.go:141] libmachine: (embed-certs-987349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:72:25", ip: ""} in network mk-embed-certs-987349: {Iface:virbr4 ExpiryTime:2025-01-20 13:29:28 +0000 UTC Type:0 Mac:52:54:00:17:72:25 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:embed-certs-987349 Clientid:01:52:54:00:17:72:25}
	I0120 12:29:34.994977  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined IP address 192.168.72.170 and MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:34.995089  992635 main.go:141] libmachine: (embed-certs-987349) DBG | Using SSH client type: external
	I0120 12:29:34.995140  992635 main.go:141] libmachine: (embed-certs-987349) DBG | Using SSH private key: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/embed-certs-987349/id_rsa (-rw-------)
	I0120 12:29:34.995185  992635 main.go:141] libmachine: (embed-certs-987349) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.170 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20151-942401/.minikube/machines/embed-certs-987349/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 12:29:34.995210  992635 main.go:141] libmachine: (embed-certs-987349) DBG | About to run SSH command:
	I0120 12:29:34.995220  992635 main.go:141] libmachine: (embed-certs-987349) DBG | exit 0
	I0120 12:29:35.121885  992635 main.go:141] libmachine: (embed-certs-987349) DBG | SSH cmd err, output: <nil>: 
	I0120 12:29:35.122314  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetConfigRaw
	I0120 12:29:35.123125  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetIP
	I0120 12:29:35.125406  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:35.125751  992635 main.go:141] libmachine: (embed-certs-987349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:72:25", ip: ""} in network mk-embed-certs-987349: {Iface:virbr4 ExpiryTime:2025-01-20 13:29:28 +0000 UTC Type:0 Mac:52:54:00:17:72:25 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:embed-certs-987349 Clientid:01:52:54:00:17:72:25}
	I0120 12:29:35.125773  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined IP address 192.168.72.170 and MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:35.126044  992635 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/embed-certs-987349/config.json ...
	I0120 12:29:35.126304  992635 machine.go:93] provisionDockerMachine start ...
	I0120 12:29:35.126327  992635 main.go:141] libmachine: (embed-certs-987349) Calling .DriverName
	I0120 12:29:35.126565  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHHostname
	I0120 12:29:35.129058  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:35.129414  992635 main.go:141] libmachine: (embed-certs-987349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:72:25", ip: ""} in network mk-embed-certs-987349: {Iface:virbr4 ExpiryTime:2025-01-20 13:29:28 +0000 UTC Type:0 Mac:52:54:00:17:72:25 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:embed-certs-987349 Clientid:01:52:54:00:17:72:25}
	I0120 12:29:35.129444  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined IP address 192.168.72.170 and MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:35.129579  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHPort
	I0120 12:29:35.129755  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHKeyPath
	I0120 12:29:35.129886  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHKeyPath
	I0120 12:29:35.130004  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHUsername
	I0120 12:29:35.130152  992635 main.go:141] libmachine: Using SSH client type: native
	I0120 12:29:35.130359  992635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.170 22 <nil> <nil>}
	I0120 12:29:35.130373  992635 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 12:29:35.242227  992635 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0120 12:29:35.242262  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetMachineName
	I0120 12:29:35.242558  992635 buildroot.go:166] provisioning hostname "embed-certs-987349"
	I0120 12:29:35.242584  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetMachineName
	I0120 12:29:35.242786  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHHostname
	I0120 12:29:35.245538  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:35.245841  992635 main.go:141] libmachine: (embed-certs-987349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:72:25", ip: ""} in network mk-embed-certs-987349: {Iface:virbr4 ExpiryTime:2025-01-20 13:29:28 +0000 UTC Type:0 Mac:52:54:00:17:72:25 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:embed-certs-987349 Clientid:01:52:54:00:17:72:25}
	I0120 12:29:35.245871  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined IP address 192.168.72.170 and MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:35.245995  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHPort
	I0120 12:29:35.246225  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHKeyPath
	I0120 12:29:35.246368  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHKeyPath
	I0120 12:29:35.246589  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHUsername
	I0120 12:29:35.246749  992635 main.go:141] libmachine: Using SSH client type: native
	I0120 12:29:35.246909  992635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.170 22 <nil> <nil>}
	I0120 12:29:35.246928  992635 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-987349 && echo "embed-certs-987349" | sudo tee /etc/hostname
	I0120 12:29:35.372276  992635 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-987349
	
	I0120 12:29:35.372312  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHHostname
	I0120 12:29:35.375091  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:35.375513  992635 main.go:141] libmachine: (embed-certs-987349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:72:25", ip: ""} in network mk-embed-certs-987349: {Iface:virbr4 ExpiryTime:2025-01-20 13:29:28 +0000 UTC Type:0 Mac:52:54:00:17:72:25 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:embed-certs-987349 Clientid:01:52:54:00:17:72:25}
	I0120 12:29:35.375541  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined IP address 192.168.72.170 and MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:35.375665  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHPort
	I0120 12:29:35.375883  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHKeyPath
	I0120 12:29:35.376074  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHKeyPath
	I0120 12:29:35.376249  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHUsername
	I0120 12:29:35.376447  992635 main.go:141] libmachine: Using SSH client type: native
	I0120 12:29:35.376608  992635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.170 22 <nil> <nil>}
	I0120 12:29:35.376623  992635 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-987349' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-987349/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-987349' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 12:29:35.493917  992635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 12:29:35.493955  992635 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20151-942401/.minikube CaCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20151-942401/.minikube}
	I0120 12:29:35.494006  992635 buildroot.go:174] setting up certificates
	I0120 12:29:35.494027  992635 provision.go:84] configureAuth start
	I0120 12:29:35.494040  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetMachineName
	I0120 12:29:35.494317  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetIP
	I0120 12:29:35.497063  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:35.497539  992635 main.go:141] libmachine: (embed-certs-987349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:72:25", ip: ""} in network mk-embed-certs-987349: {Iface:virbr4 ExpiryTime:2025-01-20 13:29:28 +0000 UTC Type:0 Mac:52:54:00:17:72:25 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:embed-certs-987349 Clientid:01:52:54:00:17:72:25}
	I0120 12:29:35.497577  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined IP address 192.168.72.170 and MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:35.497748  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHHostname
	I0120 12:29:35.500207  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:35.500549  992635 main.go:141] libmachine: (embed-certs-987349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:72:25", ip: ""} in network mk-embed-certs-987349: {Iface:virbr4 ExpiryTime:2025-01-20 13:29:28 +0000 UTC Type:0 Mac:52:54:00:17:72:25 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:embed-certs-987349 Clientid:01:52:54:00:17:72:25}
	I0120 12:29:35.500584  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined IP address 192.168.72.170 and MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:35.500714  992635 provision.go:143] copyHostCerts
	I0120 12:29:35.500766  992635 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem, removing ...
	I0120 12:29:35.500787  992635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem
	I0120 12:29:35.500849  992635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem (1078 bytes)
	I0120 12:29:35.500934  992635 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem, removing ...
	I0120 12:29:35.500942  992635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem
	I0120 12:29:35.500965  992635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem (1123 bytes)
	I0120 12:29:35.501017  992635 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem, removing ...
	I0120 12:29:35.501024  992635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem
	I0120 12:29:35.501043  992635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem (1675 bytes)
	I0120 12:29:35.501089  992635 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem org=jenkins.embed-certs-987349 san=[127.0.0.1 192.168.72.170 embed-certs-987349 localhost minikube]
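The "generating server cert" line lists the SANs baked into the machine's server certificate (loopback, the DHCP-assigned IP, the profile name, localhost, minikube). A self-contained Go sketch of issuing such a cert with crypto/x509, assuming a freshly generated throwaway CA rather than the CA material under .minikube/certs; this is illustrative only, not minikube's provision.go implementation:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// throwaway CA standing in for the real minikubeCA material
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-987349"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		// split the logged SAN list into IP and DNS entries
		for _, san := range []string{"127.0.0.1", "192.168.72.170", "embed-certs-987349", "localhost", "minikube"} {
			if ip := net.ParseIP(san); ip != nil {
				srvTmpl.IPAddresses = append(srvTmpl.IPAddresses, ip)
			} else {
				srvTmpl.DNSNames = append(srvTmpl.DNSNames, san)
			}
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}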
	I0120 12:29:35.718121  992635 provision.go:177] copyRemoteCerts
	I0120 12:29:35.718181  992635 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 12:29:35.718207  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHHostname
	I0120 12:29:35.721381  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:35.721769  992635 main.go:141] libmachine: (embed-certs-987349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:72:25", ip: ""} in network mk-embed-certs-987349: {Iface:virbr4 ExpiryTime:2025-01-20 13:29:28 +0000 UTC Type:0 Mac:52:54:00:17:72:25 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:embed-certs-987349 Clientid:01:52:54:00:17:72:25}
	I0120 12:29:35.721795  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined IP address 192.168.72.170 and MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:35.721977  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHPort
	I0120 12:29:35.722220  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHKeyPath
	I0120 12:29:35.722409  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHUsername
	I0120 12:29:35.722578  992635 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/embed-certs-987349/id_rsa Username:docker}
	I0120 12:29:35.808300  992635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0120 12:29:35.830480  992635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0120 12:29:35.851192  992635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0120 12:29:35.872069  992635 provision.go:87] duration metric: took 378.030401ms to configureAuth
	I0120 12:29:35.872104  992635 buildroot.go:189] setting minikube options for container-runtime
	I0120 12:29:35.872259  992635 config.go:182] Loaded profile config "embed-certs-987349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:29:35.872331  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHHostname
	I0120 12:29:35.875084  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:35.875413  992635 main.go:141] libmachine: (embed-certs-987349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:72:25", ip: ""} in network mk-embed-certs-987349: {Iface:virbr4 ExpiryTime:2025-01-20 13:29:28 +0000 UTC Type:0 Mac:52:54:00:17:72:25 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:embed-certs-987349 Clientid:01:52:54:00:17:72:25}
	I0120 12:29:35.875440  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined IP address 192.168.72.170 and MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:35.875575  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHPort
	I0120 12:29:35.875778  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHKeyPath
	I0120 12:29:35.875931  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHKeyPath
	I0120 12:29:35.876088  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHUsername
	I0120 12:29:35.876240  992635 main.go:141] libmachine: Using SSH client type: native
	I0120 12:29:35.876399  992635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.170 22 <nil> <nil>}
	I0120 12:29:35.876412  992635 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 12:29:36.095416  992635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 12:29:36.095446  992635 machine.go:96] duration metric: took 969.124901ms to provisionDockerMachine
	I0120 12:29:36.095462  992635 start.go:293] postStartSetup for "embed-certs-987349" (driver="kvm2")
	I0120 12:29:36.095476  992635 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 12:29:36.095500  992635 main.go:141] libmachine: (embed-certs-987349) Calling .DriverName
	I0120 12:29:36.095841  992635 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 12:29:36.095874  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHHostname
	I0120 12:29:36.099240  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:36.099650  992635 main.go:141] libmachine: (embed-certs-987349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:72:25", ip: ""} in network mk-embed-certs-987349: {Iface:virbr4 ExpiryTime:2025-01-20 13:29:28 +0000 UTC Type:0 Mac:52:54:00:17:72:25 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:embed-certs-987349 Clientid:01:52:54:00:17:72:25}
	I0120 12:29:36.099681  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined IP address 192.168.72.170 and MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:36.099884  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHPort
	I0120 12:29:36.100061  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHKeyPath
	I0120 12:29:36.100245  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHUsername
	I0120 12:29:36.100364  992635 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/embed-certs-987349/id_rsa Username:docker}
	I0120 12:29:36.187761  992635 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 12:29:36.191607  992635 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 12:29:36.191642  992635 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-942401/.minikube/addons for local assets ...
	I0120 12:29:36.191704  992635 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-942401/.minikube/files for local assets ...
	I0120 12:29:36.191787  992635 filesync.go:149] local asset: /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem -> 9496562.pem in /etc/ssl/certs
	I0120 12:29:36.191905  992635 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 12:29:36.200268  992635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem --> /etc/ssl/certs/9496562.pem (1708 bytes)
	I0120 12:29:36.221138  992635 start.go:296] duration metric: took 125.660935ms for postStartSetup
	I0120 12:29:36.221182  992635 fix.go:56] duration metric: took 18.79434928s for fixHost
	I0120 12:29:36.221212  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHHostname
	I0120 12:29:36.223519  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:36.223780  992635 main.go:141] libmachine: (embed-certs-987349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:72:25", ip: ""} in network mk-embed-certs-987349: {Iface:virbr4 ExpiryTime:2025-01-20 13:29:28 +0000 UTC Type:0 Mac:52:54:00:17:72:25 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:embed-certs-987349 Clientid:01:52:54:00:17:72:25}
	I0120 12:29:36.223795  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined IP address 192.168.72.170 and MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:36.223960  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHPort
	I0120 12:29:36.224167  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHKeyPath
	I0120 12:29:36.224302  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHKeyPath
	I0120 12:29:36.224434  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHUsername
	I0120 12:29:36.224614  992635 main.go:141] libmachine: Using SSH client type: native
	I0120 12:29:36.224803  992635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.170 22 <nil> <nil>}
	I0120 12:29:36.224819  992635 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 12:29:36.334399  992635 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737376176.308188852
	
	I0120 12:29:36.334418  992635 fix.go:216] guest clock: 1737376176.308188852
	I0120 12:29:36.334436  992635 fix.go:229] Guest: 2025-01-20 12:29:36.308188852 +0000 UTC Remote: 2025-01-20 12:29:36.221189501 +0000 UTC m=+18.937613500 (delta=86.999351ms)
	I0120 12:29:36.334455  992635 fix.go:200] guest clock delta is within tolerance: 86.999351ms
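The fix.go lines compare the guest's `date +%s.%N` output against the host clock and accept the machine when the skew is small. A short Go sketch of that comparison, using the timestamps from the log; the clockDelta helper and the hard-coded values are illustrative only:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta turns the guest's `date +%s.%N` output into a time.Time and
	// returns its absolute offset from the host-side reference timestamp.
	func clockDelta(guestOut string, hostNow time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := hostNow.Sub(guest)
		if delta < 0 {
			delta = -delta
		}
		return delta, nil
	}

	func main() {
		// values captured from the log above: guest `date +%s.%N` output and the
		// host-side timestamp taken around the same moment
		host := time.Date(2025, 1, 20, 12, 29, 36, 221189501, time.UTC)
		delta, err := clockDelta("1737376176.308188852\n", host)
		fmt.Println(delta, err) // ~87ms, matching the logged delta
	}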
	I0120 12:29:36.334462  992635 start.go:83] releasing machines lock for "embed-certs-987349", held for 18.907645353s
	I0120 12:29:36.334478  992635 main.go:141] libmachine: (embed-certs-987349) Calling .DriverName
	I0120 12:29:36.334708  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetIP
	I0120 12:29:36.337292  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:36.337679  992635 main.go:141] libmachine: (embed-certs-987349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:72:25", ip: ""} in network mk-embed-certs-987349: {Iface:virbr4 ExpiryTime:2025-01-20 13:29:28 +0000 UTC Type:0 Mac:52:54:00:17:72:25 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:embed-certs-987349 Clientid:01:52:54:00:17:72:25}
	I0120 12:29:36.337709  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined IP address 192.168.72.170 and MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:36.337816  992635 main.go:141] libmachine: (embed-certs-987349) Calling .DriverName
	I0120 12:29:36.338385  992635 main.go:141] libmachine: (embed-certs-987349) Calling .DriverName
	I0120 12:29:36.338552  992635 main.go:141] libmachine: (embed-certs-987349) Calling .DriverName
	I0120 12:29:36.338628  992635 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 12:29:36.338683  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHHostname
	I0120 12:29:36.338779  992635 ssh_runner.go:195] Run: cat /version.json
	I0120 12:29:36.338808  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHHostname
	I0120 12:29:36.341434  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:36.341581  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:36.341836  992635 main.go:141] libmachine: (embed-certs-987349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:72:25", ip: ""} in network mk-embed-certs-987349: {Iface:virbr4 ExpiryTime:2025-01-20 13:29:28 +0000 UTC Type:0 Mac:52:54:00:17:72:25 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:embed-certs-987349 Clientid:01:52:54:00:17:72:25}
	I0120 12:29:36.341866  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined IP address 192.168.72.170 and MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:36.342002  992635 main.go:141] libmachine: (embed-certs-987349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:72:25", ip: ""} in network mk-embed-certs-987349: {Iface:virbr4 ExpiryTime:2025-01-20 13:29:28 +0000 UTC Type:0 Mac:52:54:00:17:72:25 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:embed-certs-987349 Clientid:01:52:54:00:17:72:25}
	I0120 12:29:36.342018  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHPort
	I0120 12:29:36.342038  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined IP address 192.168.72.170 and MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:36.342185  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHPort
	I0120 12:29:36.342186  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHKeyPath
	I0120 12:29:36.342360  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHKeyPath
	I0120 12:29:36.342555  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHUsername
	I0120 12:29:36.342559  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHUsername
	I0120 12:29:36.342848  992635 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/embed-certs-987349/id_rsa Username:docker}
	I0120 12:29:36.342857  992635 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/embed-certs-987349/id_rsa Username:docker}
	I0120 12:29:36.452180  992635 ssh_runner.go:195] Run: systemctl --version
	I0120 12:29:36.457489  992635 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 12:29:36.603447  992635 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 12:29:36.609157  992635 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 12:29:36.609231  992635 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 12:29:36.627679  992635 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 12:29:36.627703  992635 start.go:495] detecting cgroup driver to use...
	I0120 12:29:36.627774  992635 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 12:29:36.643646  992635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 12:29:36.657040  992635 docker.go:217] disabling cri-docker service (if available) ...
	I0120 12:29:36.657084  992635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 12:29:36.669993  992635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 12:29:36.682976  992635 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 12:29:36.800733  992635 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 12:29:36.967120  992635 docker.go:233] disabling docker service ...
	I0120 12:29:36.967199  992635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 12:29:36.980604  992635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 12:29:36.992347  992635 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 12:29:37.101550  992635 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 12:29:37.220048  992635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 12:29:37.233072  992635 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 12:29:37.250339  992635 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0120 12:29:37.250415  992635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:29:37.259873  992635 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 12:29:37.259935  992635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:29:37.269473  992635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:29:37.278831  992635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:29:37.288787  992635 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 12:29:37.298463  992635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:29:37.308127  992635 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:29:37.323158  992635 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:29:37.332687  992635 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 12:29:37.341089  992635 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 12:29:37.341133  992635 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 12:29:37.352618  992635 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 12:29:37.361320  992635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:29:37.473119  992635 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 12:29:37.563900  992635 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 12:29:37.563987  992635 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 12:29:37.568264  992635 start.go:563] Will wait 60s for crictl version
	I0120 12:29:37.568323  992635 ssh_runner.go:195] Run: which crictl
	I0120 12:29:37.571784  992635 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 12:29:37.611833  992635 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 12:29:37.611921  992635 ssh_runner.go:195] Run: crio --version
	I0120 12:29:37.638851  992635 ssh_runner.go:195] Run: crio --version
	I0120 12:29:37.666869  992635 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0120 12:29:37.668103  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetIP
	I0120 12:29:37.670631  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:37.670959  992635 main.go:141] libmachine: (embed-certs-987349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:72:25", ip: ""} in network mk-embed-certs-987349: {Iface:virbr4 ExpiryTime:2025-01-20 13:29:28 +0000 UTC Type:0 Mac:52:54:00:17:72:25 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:embed-certs-987349 Clientid:01:52:54:00:17:72:25}
	I0120 12:29:37.670992  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined IP address 192.168.72.170 and MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:29:37.671174  992635 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0120 12:29:37.675338  992635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 12:29:37.687605  992635 kubeadm.go:883] updating cluster {Name:embed-certs-987349 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-987349 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.170 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 12:29:37.687739  992635 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 12:29:37.687786  992635 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:29:37.721598  992635 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0120 12:29:37.721667  992635 ssh_runner.go:195] Run: which lz4
	I0120 12:29:37.725546  992635 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 12:29:37.729537  992635 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 12:29:37.729567  992635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I0120 12:29:39.003702  992635 crio.go:462] duration metric: took 1.278187433s to copy over tarball
	I0120 12:29:39.003768  992635 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 12:29:41.081947  992635 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.078129018s)
	I0120 12:29:41.081998  992635 crio.go:469] duration metric: took 2.078265526s to extract the tarball
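Steps like the tarball copy and extraction are wrapped in timing helpers that emit the "duration metric: took …" lines. A minimal sketch of that pattern; timed is a hypothetical name, not the actual ssh_runner API:

	package main

	import (
		"log"
		"time"
	)

	// timed runs fn and logs how long it took, the pattern behind the repeated
	// "duration metric: took ..." lines above.
	func timed(name string, fn func() error) error {
		start := time.Now()
		err := fn()
		log.Printf("duration metric: took %s to %s", time.Since(start), name)
		return err
	}

	func main() {
		_ = timed("extract the tarball", func() error {
			time.Sleep(50 * time.Millisecond) // stand-in for the real work
			return nil
		})
	}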
	I0120 12:29:41.082011  992635 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0120 12:29:41.118076  992635 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:29:41.157652  992635 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 12:29:41.157676  992635 cache_images.go:84] Images are preloaded, skipping loading
	I0120 12:29:41.157685  992635 kubeadm.go:934] updating node { 192.168.72.170 8443 v1.32.0 crio true true} ...
	I0120 12:29:41.157785  992635 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-987349 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:embed-certs-987349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 12:29:41.157848  992635 ssh_runner.go:195] Run: crio config
	I0120 12:29:41.199910  992635 cni.go:84] Creating CNI manager for ""
	I0120 12:29:41.199933  992635 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:29:41.199944  992635 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 12:29:41.199965  992635 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.170 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-987349 NodeName:embed-certs-987349 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.170"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.170 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 12:29:41.200088  992635 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.170
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-987349"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.170"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.170"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 12:29:41.200150  992635 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 12:29:41.209934  992635 binaries.go:44] Found k8s binaries, skipping transfer
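The kubeadm config printed above is rendered from the option set logged at kubeadm.go:189. A small Go sketch of producing such an InitConfiguration with text/template, using hypothetical field and template names rather than minikube's real bootstrapper templates:

	package main

	import (
		"os"
		"text/template"
	)

	type initOpts struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		CRISocket        string
	}

	const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	`

	func main() {
		t := template.Must(template.New("init").Parse(initTmpl))
		// values taken from the cluster config logged above
		_ = t.Execute(os.Stdout, initOpts{
			AdvertiseAddress: "192.168.72.170",
			BindPort:         8443,
			NodeName:         "embed-certs-987349",
			CRISocket:        "unix:///var/run/crio/crio.sock",
		})
	}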
	I0120 12:29:41.209995  992635 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 12:29:41.219163  992635 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0120 12:29:41.234498  992635 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 12:29:41.248763  992635 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I0120 12:29:41.263626  992635 ssh_runner.go:195] Run: grep 192.168.72.170	control-plane.minikube.internal$ /etc/hosts
	I0120 12:29:41.266976  992635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.170	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
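Both host.minikube.internal and control-plane.minikube.internal are pinned by rewriting /etc/hosts with the grep/echo/cp one-liner shown above. An equivalent idempotent update sketched in Go; ensureHostsEntry and the example file path are illustrative, not minikube code:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry rewrites a hosts file so that exactly one line maps name
	// to ip, the same effect as the grep/echo/cp one-liner above.
	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil && !os.IsNotExist(err) {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if line == "" || strings.HasSuffix(line, "\t"+name) {
				continue // drop blank lines and any stale mapping for this name
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		path := "hosts.example" // illustrative file, not the guest's /etc/hosts
		_ = os.WriteFile(path, []byte("127.0.0.1\tlocalhost\n"), 0644)
		if err := ensureHostsEntry(path, "192.168.72.170", "control-plane.minikube.internal"); err != nil {
			fmt.Println(err)
		}
		out, _ := os.ReadFile(path)
		fmt.Print(string(out))
	}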
	I0120 12:29:41.277510  992635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:29:41.404311  992635 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:29:41.420958  992635 certs.go:68] Setting up /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/embed-certs-987349 for IP: 192.168.72.170
	I0120 12:29:41.420977  992635 certs.go:194] generating shared ca certs ...
	I0120 12:29:41.420995  992635 certs.go:226] acquiring lock for ca certs: {Name:mk7d6f61e0c358ebc451db1b73619131ab80bd63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:29:41.421167  992635 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20151-942401/.minikube/ca.key
	I0120 12:29:41.421216  992635 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.key
	I0120 12:29:41.421230  992635 certs.go:256] generating profile certs ...
	I0120 12:29:41.421329  992635 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/embed-certs-987349/client.key
	I0120 12:29:41.421415  992635 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/embed-certs-987349/apiserver.key.fb70ae08
	I0120 12:29:41.421456  992635 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/embed-certs-987349/proxy-client.key
	I0120 12:29:41.421569  992635 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656.pem (1338 bytes)
	W0120 12:29:41.421613  992635 certs.go:480] ignoring /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656_empty.pem, impossibly tiny 0 bytes
	I0120 12:29:41.421627  992635 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem (1679 bytes)
	I0120 12:29:41.421656  992635 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem (1078 bytes)
	I0120 12:29:41.421679  992635 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem (1123 bytes)
	I0120 12:29:41.421702  992635 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem (1675 bytes)
	I0120 12:29:41.421739  992635 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem (1708 bytes)
	I0120 12:29:41.422301  992635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 12:29:41.474112  992635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0120 12:29:41.504253  992635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 12:29:41.529528  992635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 12:29:41.555229  992635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/embed-certs-987349/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0120 12:29:41.588910  992635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/embed-certs-987349/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0120 12:29:41.610537  992635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/embed-certs-987349/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 12:29:41.631813  992635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/embed-certs-987349/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0120 12:29:41.653692  992635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 12:29:41.675911  992635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656.pem --> /usr/share/ca-certificates/949656.pem (1338 bytes)
	I0120 12:29:41.696912  992635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem --> /usr/share/ca-certificates/9496562.pem (1708 bytes)
	I0120 12:29:41.717308  992635 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 12:29:41.732004  992635 ssh_runner.go:195] Run: openssl version
	I0120 12:29:41.737123  992635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 12:29:41.746416  992635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:29:41.750172  992635 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 11:22 /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:29:41.750238  992635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:29:41.755378  992635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 12:29:41.765085  992635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/949656.pem && ln -fs /usr/share/ca-certificates/949656.pem /etc/ssl/certs/949656.pem"
	I0120 12:29:41.774785  992635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/949656.pem
	I0120 12:29:41.778696  992635 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 11:30 /usr/share/ca-certificates/949656.pem
	I0120 12:29:41.778729  992635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/949656.pem
	I0120 12:29:41.783763  992635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/949656.pem /etc/ssl/certs/51391683.0"
	I0120 12:29:41.793280  992635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9496562.pem && ln -fs /usr/share/ca-certificates/9496562.pem /etc/ssl/certs/9496562.pem"
	I0120 12:29:41.802635  992635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9496562.pem
	I0120 12:29:41.806596  992635 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 11:30 /usr/share/ca-certificates/9496562.pem
	I0120 12:29:41.806639  992635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9496562.pem
	I0120 12:29:41.811505  992635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9496562.pem /etc/ssl/certs/3ec20f2e.0"
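Each CA is exposed to OpenSSL by symlinking it under its subject hash (for example /etc/ssl/certs/b5213941.0 for minikubeCA.pem above). A Go sketch of that openssl-hash-plus-symlink step; linkCertByHash is a hypothetical helper and assumes the openssl binary is on PATH:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCertByHash links a certificate into a certs directory under its
	// OpenSSL subject-hash name (<hash>.0), mirroring the `openssl x509 -hash`
	// and `ln -fs` commands in the log.
	func linkCertByHash(certPath, certsDir string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
		if _, err := os.Lstat(link); err == nil {
			return link, nil // already linked
		}
		return link, os.Symlink(certPath, link)
	}

	func main() {
		// illustrative paths; the log above operates on /usr/share/ca-certificates
		// and /etc/ssl/certs inside the guest
		link, err := linkCertByHash("minikubeCA.pem", ".")
		fmt.Println(link, err)
	}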
	I0120 12:29:41.820837  992635 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 12:29:41.824725  992635 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 12:29:41.829848  992635 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 12:29:41.834945  992635 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 12:29:41.839969  992635 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 12:29:41.845072  992635 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 12:29:41.850168  992635 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
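The `openssl x509 -checkend 86400` runs above verify that each control-plane certificate remains valid for at least another day. The same check sketched in Go, with a hypothetical expiresWithin helper and an illustrative file path:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether a PEM-encoded certificate expires inside the
	// given window, the Go equivalent of `openssl x509 -checkend 86400`.
	func expiresWithin(pemBytes []byte, window time.Duration) (bool, error) {
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			return false, errors.New("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		data, err := os.ReadFile("apiserver.crt") // illustrative path
		if err != nil {
			fmt.Println(err)
			return
		}
		soon, err := expiresWithin(data, 24*time.Hour)
		fmt.Println(soon, err)
	}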
	I0120 12:29:41.855318  992635 kubeadm.go:392] StartCluster: {Name:embed-certs-987349 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-987349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.170 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:29:41.855419  992635 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 12:29:41.855465  992635 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 12:29:41.888469  992635 cri.go:89] found id: ""
	I0120 12:29:41.888525  992635 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 12:29:41.897148  992635 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0120 12:29:41.897168  992635 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0120 12:29:41.897206  992635 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0120 12:29:41.905492  992635 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0120 12:29:41.906134  992635 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-987349" does not appear in /home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:29:41.906595  992635 kubeconfig.go:62] /home/jenkins/minikube-integration/20151-942401/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-987349" cluster setting kubeconfig missing "embed-certs-987349" context setting]
	I0120 12:29:41.907373  992635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/kubeconfig: {Name:mk7b2d17f701fc845d991764ab9cc32b8b0646e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:29:41.908810  992635 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0120 12:29:41.925051  992635 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.170
	I0120 12:29:41.925079  992635 kubeadm.go:1160] stopping kube-system containers ...
	I0120 12:29:41.925089  992635 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0120 12:29:41.925127  992635 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 12:29:41.961894  992635 cri.go:89] found id: ""
	I0120 12:29:41.961947  992635 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0120 12:29:41.978720  992635 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:29:41.987150  992635 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:29:41.987169  992635 kubeadm.go:157] found existing configuration files:
	
	I0120 12:29:41.987216  992635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 12:29:41.995204  992635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:29:41.995255  992635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:29:42.003535  992635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 12:29:42.011395  992635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:29:42.011444  992635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:29:42.019446  992635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 12:29:42.027260  992635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:29:42.027298  992635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:29:42.035389  992635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 12:29:42.043224  992635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:29:42.043256  992635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 12:29:42.051466  992635 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 12:29:42.059813  992635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:29:42.161280  992635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:29:43.339677  992635 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.17835837s)
	I0120 12:29:43.339709  992635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:29:43.543206  992635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:29:43.617729  992635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:29:43.713763  992635 api_server.go:52] waiting for apiserver process to appear ...
	I0120 12:29:43.713863  992635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:29:44.214081  992635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:29:44.714084  992635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:29:45.214054  992635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:29:45.244563  992635 api_server.go:72] duration metric: took 1.530799512s to wait for apiserver process to appear ...
	I0120 12:29:45.244603  992635 api_server.go:88] waiting for apiserver healthz status ...
	I0120 12:29:45.244629  992635 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8443/healthz ...
	I0120 12:29:45.245205  992635 api_server.go:269] stopped: https://192.168.72.170:8443/healthz: Get "https://192.168.72.170:8443/healthz": dial tcp 192.168.72.170:8443: connect: connection refused
	I0120 12:29:45.744859  992635 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8443/healthz ...
	I0120 12:29:47.811935  992635 api_server.go:279] https://192.168.72.170:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0120 12:29:47.811968  992635 api_server.go:103] status: https://192.168.72.170:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0120 12:29:47.811984  992635 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8443/healthz ...
	I0120 12:29:47.828386  992635 api_server.go:279] https://192.168.72.170:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0120 12:29:47.828415  992635 api_server.go:103] status: https://192.168.72.170:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0120 12:29:48.244949  992635 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8443/healthz ...
	I0120 12:29:48.249458  992635 api_server.go:279] https://192.168.72.170:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 12:29:48.249483  992635 api_server.go:103] status: https://192.168.72.170:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 12:29:48.745124  992635 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8443/healthz ...
	I0120 12:29:48.755938  992635 api_server.go:279] https://192.168.72.170:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 12:29:48.755969  992635 api_server.go:103] status: https://192.168.72.170:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 12:29:49.245673  992635 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8443/healthz ...
	I0120 12:29:49.251059  992635 api_server.go:279] https://192.168.72.170:8443/healthz returned 200:
	ok
	I0120 12:29:49.257602  992635 api_server.go:141] control plane version: v1.32.0
	I0120 12:29:49.257629  992635 api_server.go:131] duration metric: took 4.013018765s to wait for apiserver health ...
	I0120 12:29:49.257639  992635 cni.go:84] Creating CNI manager for ""
	I0120 12:29:49.257644  992635 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:29:49.259239  992635 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 12:29:49.260720  992635 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 12:29:49.273768  992635 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 12:29:49.292666  992635 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 12:29:49.304620  992635 system_pods.go:59] 8 kube-system pods found
	I0120 12:29:49.304653  992635 system_pods.go:61] "coredns-668d6bf9bc-ccwj2" [bdcbb870-7637-465e-bc5a-5d7cfa8fe0f4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0120 12:29:49.304665  992635 system_pods.go:61] "etcd-embed-certs-987349" [3161bfc7-2f57-4b39-ab0b-3d46fc7da03c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0120 12:29:49.304678  992635 system_pods.go:61] "kube-apiserver-embed-certs-987349" [2cd4f8ce-a354-4675-8abb-c16eafb72559] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0120 12:29:49.304688  992635 system_pods.go:61] "kube-controller-manager-embed-certs-987349" [292dcc26-57ee-44e8-aaf6-f197c6581aa7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0120 12:29:49.304700  992635 system_pods.go:61] "kube-proxy-b6jvx" [faa8a0e9-65a4-462e-b7c8-a84ea201e03b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0120 12:29:49.304716  992635 system_pods.go:61] "kube-scheduler-embed-certs-987349" [4ddf45ec-fd2c-4f11-a5d9-b915b3186557] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0120 12:29:49.304728  992635 system_pods.go:61] "metrics-server-f79f97bbb-shgd4" [5b11bf28-dfce-4a53-9c92-fd3d0456ed56] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 12:29:49.304738  992635 system_pods.go:61] "storage-provisioner" [ee39b816-d314-4121-8127-618b891f6168] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0120 12:29:49.304746  992635 system_pods.go:74] duration metric: took 12.061398ms to wait for pod list to return data ...
	I0120 12:29:49.304761  992635 node_conditions.go:102] verifying NodePressure condition ...
	I0120 12:29:49.309222  992635 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 12:29:49.309251  992635 node_conditions.go:123] node cpu capacity is 2
	I0120 12:29:49.309267  992635 node_conditions.go:105] duration metric: took 4.497952ms to run NodePressure ...
	I0120 12:29:49.309292  992635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:29:49.592706  992635 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0120 12:29:49.596904  992635 kubeadm.go:739] kubelet initialised
	I0120 12:29:49.596935  992635 kubeadm.go:740] duration metric: took 4.197712ms waiting for restarted kubelet to initialise ...
	I0120 12:29:49.596947  992635 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:29:49.601528  992635 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-ccwj2" in "kube-system" namespace to be "Ready" ...
	I0120 12:29:51.612452  992635 pod_ready.go:103] pod "coredns-668d6bf9bc-ccwj2" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:54.108272  992635 pod_ready.go:93] pod "coredns-668d6bf9bc-ccwj2" in "kube-system" namespace has status "Ready":"True"
	I0120 12:29:54.108301  992635 pod_ready.go:82] duration metric: took 4.506741962s for pod "coredns-668d6bf9bc-ccwj2" in "kube-system" namespace to be "Ready" ...
	I0120 12:29:54.108311  992635 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:29:56.115189  992635 pod_ready.go:103] pod "etcd-embed-certs-987349" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:58.118591  992635 pod_ready.go:103] pod "etcd-embed-certs-987349" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:58.614999  992635 pod_ready.go:93] pod "etcd-embed-certs-987349" in "kube-system" namespace has status "Ready":"True"
	I0120 12:29:58.615028  992635 pod_ready.go:82] duration metric: took 4.506710379s for pod "etcd-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:29:58.615039  992635 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:30:00.121472  992635 pod_ready.go:93] pod "kube-apiserver-embed-certs-987349" in "kube-system" namespace has status "Ready":"True"
	I0120 12:30:00.121504  992635 pod_ready.go:82] duration metric: took 1.506457547s for pod "kube-apiserver-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:30:00.121517  992635 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:30:02.128357  992635 pod_ready.go:103] pod "kube-controller-manager-embed-certs-987349" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:04.627842  992635 pod_ready.go:103] pod "kube-controller-manager-embed-certs-987349" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:05.127435  992635 pod_ready.go:93] pod "kube-controller-manager-embed-certs-987349" in "kube-system" namespace has status "Ready":"True"
	I0120 12:30:05.127470  992635 pod_ready.go:82] duration metric: took 5.005944231s for pod "kube-controller-manager-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:30:05.127484  992635 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-b6jvx" in "kube-system" namespace to be "Ready" ...
	I0120 12:30:05.131559  992635 pod_ready.go:93] pod "kube-proxy-b6jvx" in "kube-system" namespace has status "Ready":"True"
	I0120 12:30:05.131580  992635 pod_ready.go:82] duration metric: took 4.089169ms for pod "kube-proxy-b6jvx" in "kube-system" namespace to be "Ready" ...
	I0120 12:30:05.131588  992635 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:30:05.135431  992635 pod_ready.go:93] pod "kube-scheduler-embed-certs-987349" in "kube-system" namespace has status "Ready":"True"
	I0120 12:30:05.135451  992635 pod_ready.go:82] duration metric: took 3.856322ms for pod "kube-scheduler-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:30:05.135463  992635 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace to be "Ready" ...
	I0120 12:30:07.141497  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:09.142285  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:11.142360  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:13.641686  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:15.642404  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:17.642513  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:20.142794  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:22.640783  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:24.640963  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:26.642349  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:29.142499  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:31.643068  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:34.143062  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:36.643053  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:38.643091  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:41.142810  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:43.642514  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:46.142322  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:48.642769  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:51.141991  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:53.143521  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:55.641633  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:57.642732  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:00.141876  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:02.642076  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:05.141925  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:07.641120  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:10.142764  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:12.642593  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:15.141471  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:17.141620  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:19.141727  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:21.142012  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:23.142601  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:25.641748  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:27.642107  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:30.141452  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:32.142854  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:34.642634  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:37.142882  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:39.144316  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:41.165382  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:43.642362  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:46.140694  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:48.141706  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:50.641289  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:52.641982  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:54.643173  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:57.142153  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:59.640970  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:01.641596  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:03.644442  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:06.140708  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:08.142135  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:10.142823  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:12.641948  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:15.141465  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:17.641252  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:19.642645  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:22.140826  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:24.141192  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:26.641799  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:28.642020  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:31.141741  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:33.142049  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:35.142316  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:37.641649  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:40.140763  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:42.141742  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:44.641564  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:46.642075  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:48.642162  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:51.142915  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:53.643750  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:56.141402  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:58.642200  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:01.142620  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:03.642811  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:06.141374  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:08.141896  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:10.642250  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:13.142031  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:15.642742  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:18.142112  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:20.642242  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:23.141922  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:25.142300  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:27.642074  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:30.142170  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:32.641645  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:35.141557  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:37.141719  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:39.142612  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:41.145567  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:43.642109  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:45.643138  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:48.141455  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:50.142912  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:52.642467  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:55.143921  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:57.641818  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:00.141288  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:02.142885  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:04.642669  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:05.136388  992635 pod_ready.go:82] duration metric: took 4m0.000888072s for pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace to be "Ready" ...
	E0120 12:34:05.136424  992635 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace to be "Ready" (will not retry!)
	I0120 12:34:05.136487  992635 pod_ready.go:39] duration metric: took 4m15.539523942s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:34:05.136548  992635 kubeadm.go:597] duration metric: took 4m23.239372129s to restartPrimaryControlPlane
	W0120 12:34:05.136646  992635 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0120 12:34:05.136701  992635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 12:34:32.776819  992635 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.640090134s)
	I0120 12:34:32.776911  992635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 12:34:32.792110  992635 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 12:34:32.801453  992635 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:34:32.809836  992635 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:34:32.809855  992635 kubeadm.go:157] found existing configuration files:
	
	I0120 12:34:32.809892  992635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 12:34:32.817968  992635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:34:32.818014  992635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:34:32.826142  992635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 12:34:32.834058  992635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:34:32.834109  992635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:34:32.842776  992635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 12:34:32.850601  992635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:34:32.850645  992635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:34:32.858854  992635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 12:34:32.866819  992635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:34:32.866860  992635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 12:34:32.875193  992635 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 12:34:32.920522  992635 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 12:34:32.920570  992635 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 12:34:33.023871  992635 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 12:34:33.024001  992635 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 12:34:33.024134  992635 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 12:34:33.032806  992635 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 12:34:33.035443  992635 out.go:235]   - Generating certificates and keys ...
	I0120 12:34:33.035549  992635 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 12:34:33.035644  992635 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 12:34:33.035776  992635 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 12:34:33.035886  992635 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 12:34:33.035993  992635 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 12:34:33.036086  992635 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 12:34:33.037424  992635 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 12:34:33.037490  992635 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 12:34:33.037563  992635 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 12:34:33.037649  992635 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 12:34:33.037695  992635 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 12:34:33.037750  992635 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 12:34:33.105282  992635 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 12:34:33.414668  992635 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 12:34:33.727680  992635 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 12:34:33.812741  992635 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 12:34:33.984459  992635 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 12:34:33.985140  992635 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 12:34:33.988084  992635 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 12:34:33.990145  992635 out.go:235]   - Booting up control plane ...
	I0120 12:34:33.990278  992635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 12:34:33.990399  992635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 12:34:33.990496  992635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 12:34:34.010394  992635 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 12:34:34.017815  992635 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 12:34:34.017877  992635 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 12:34:34.137419  992635 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 12:34:34.137546  992635 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 12:34:35.139769  992635 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002196985s
	I0120 12:34:35.139867  992635 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 12:34:39.641165  992635 kubeadm.go:310] [api-check] The API server is healthy after 4.501397328s
	I0120 12:34:39.658614  992635 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 12:34:40.171926  992635 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 12:34:40.198719  992635 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 12:34:40.198914  992635 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-987349 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 12:34:40.207929  992635 kubeadm.go:310] [bootstrap-token] Using token: n4uhes.3ig136bhcqw1unce
	I0120 12:34:40.209373  992635 out.go:235]   - Configuring RBAC rules ...
	I0120 12:34:40.209504  992635 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 12:34:40.213198  992635 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 12:34:40.219884  992635 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 12:34:40.223154  992635 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 12:34:40.228539  992635 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 12:34:40.232011  992635 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 12:34:40.369420  992635 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 12:34:40.817626  992635 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 12:34:41.370167  992635 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 12:34:41.371275  992635 kubeadm.go:310] 
	I0120 12:34:41.371411  992635 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 12:34:41.371436  992635 kubeadm.go:310] 
	I0120 12:34:41.371547  992635 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 12:34:41.371567  992635 kubeadm.go:310] 
	I0120 12:34:41.371607  992635 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 12:34:41.371696  992635 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 12:34:41.371776  992635 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 12:34:41.371785  992635 kubeadm.go:310] 
	I0120 12:34:41.371870  992635 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 12:34:41.371879  992635 kubeadm.go:310] 
	I0120 12:34:41.371946  992635 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 12:34:41.371956  992635 kubeadm.go:310] 
	I0120 12:34:41.372030  992635 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 12:34:41.372156  992635 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 12:34:41.372262  992635 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 12:34:41.372278  992635 kubeadm.go:310] 
	I0120 12:34:41.372392  992635 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 12:34:41.372498  992635 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 12:34:41.372507  992635 kubeadm.go:310] 
	I0120 12:34:41.372606  992635 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token n4uhes.3ig136bhcqw1unce \
	I0120 12:34:41.372783  992635 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9daca4b474befed26d429fab1ed19f40e58ea9925316ea59a2e801171f5c9665 \
	I0120 12:34:41.372829  992635 kubeadm.go:310] 	--control-plane 
	I0120 12:34:41.372852  992635 kubeadm.go:310] 
	I0120 12:34:41.372972  992635 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 12:34:41.372985  992635 kubeadm.go:310] 
	I0120 12:34:41.373076  992635 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token n4uhes.3ig136bhcqw1unce \
	I0120 12:34:41.373204  992635 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9daca4b474befed26d429fab1ed19f40e58ea9925316ea59a2e801171f5c9665 
	I0120 12:34:41.373662  992635 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 12:34:41.373689  992635 cni.go:84] Creating CNI manager for ""
	I0120 12:34:41.373703  992635 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:34:41.375374  992635 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 12:34:41.376667  992635 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 12:34:41.387591  992635 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 12:34:41.405656  992635 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 12:34:41.405748  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:34:41.405779  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-987349 minikube.k8s.io/updated_at=2025_01_20T12_34_41_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=77d80cf1517f5f1439721b28711982314b21bec9 minikube.k8s.io/name=embed-certs-987349 minikube.k8s.io/primary=true
	I0120 12:34:41.445579  992635 ops.go:34] apiserver oom_adj: -16
	I0120 12:34:41.593723  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:34:42.093899  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:34:42.593991  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:34:43.093847  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:34:43.594692  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:34:44.094458  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:34:44.594425  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:34:45.094074  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:34:45.201304  992635 kubeadm.go:1113] duration metric: took 3.795623962s to wait for elevateKubeSystemPrivileges
	I0120 12:34:45.201350  992635 kubeadm.go:394] duration metric: took 5m3.346037476s to StartCluster
	I0120 12:34:45.201376  992635 settings.go:142] acquiring lock: {Name:mk1751cb86b61fcfcb0ff093c93fb653ca2002ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:34:45.201474  992635 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:34:45.204831  992635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/kubeconfig: {Name:mk7b2d17f701fc845d991764ab9cc32b8b0646e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:34:45.205103  992635 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.170 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 12:34:45.205287  992635 config.go:182] Loaded profile config "embed-certs-987349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:34:45.205236  992635 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 12:34:45.205342  992635 addons.go:69] Setting dashboard=true in profile "embed-certs-987349"
	I0120 12:34:45.205370  992635 addons.go:238] Setting addon dashboard=true in "embed-certs-987349"
	I0120 12:34:45.205355  992635 addons.go:69] Setting default-storageclass=true in profile "embed-certs-987349"
	I0120 12:34:45.205338  992635 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-987349"
	I0120 12:34:45.205375  992635 addons.go:69] Setting metrics-server=true in profile "embed-certs-987349"
	I0120 12:34:45.205395  992635 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-987349"
	W0120 12:34:45.205403  992635 addons.go:247] addon storage-provisioner should already be in state true
	I0120 12:34:45.205413  992635 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-987349"
	I0120 12:34:45.205443  992635 host.go:66] Checking if "embed-certs-987349" exists ...
	W0120 12:34:45.205383  992635 addons.go:247] addon dashboard should already be in state true
	I0120 12:34:45.205402  992635 addons.go:238] Setting addon metrics-server=true in "embed-certs-987349"
	I0120 12:34:45.205522  992635 host.go:66] Checking if "embed-certs-987349" exists ...
	W0120 12:34:45.205537  992635 addons.go:247] addon metrics-server should already be in state true
	I0120 12:34:45.205585  992635 host.go:66] Checking if "embed-certs-987349" exists ...
	I0120 12:34:45.205843  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:34:45.205869  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:34:45.205889  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:45.205900  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:45.205939  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:34:45.205984  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:45.205987  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:34:45.206010  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:45.206677  992635 out.go:177] * Verifying Kubernetes components...
	I0120 12:34:45.208137  992635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:34:45.222507  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40047
	I0120 12:34:45.222862  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42645
	I0120 12:34:45.223151  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:45.223444  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44495
	I0120 12:34:45.223795  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:34:45.223818  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:45.223841  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:45.224249  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:45.224372  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:34:45.224394  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:45.224716  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38727
	I0120 12:34:45.224739  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:45.224840  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:34:45.224881  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:45.225063  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:45.225306  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:34:45.225342  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:45.225362  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:45.225827  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:34:45.225827  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:34:45.225864  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:45.225848  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:45.226299  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:45.226361  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:45.226579  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetState
	I0120 12:34:45.226996  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:34:45.227044  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:45.230457  992635 addons.go:238] Setting addon default-storageclass=true in "embed-certs-987349"
	W0120 12:34:45.230485  992635 addons.go:247] addon default-storageclass should already be in state true
	I0120 12:34:45.230516  992635 host.go:66] Checking if "embed-certs-987349" exists ...
	I0120 12:34:45.230928  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:34:45.230994  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:45.245536  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39527
	I0120 12:34:45.246137  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:45.246774  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:34:45.246800  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:45.246874  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46007
	I0120 12:34:45.247488  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:45.247514  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:45.247491  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34245
	I0120 12:34:45.247884  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:45.247991  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetState
	I0120 12:34:45.248377  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:34:45.248398  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:45.248650  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:34:45.248676  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:45.249046  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:45.249050  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:45.249260  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetState
	I0120 12:34:45.249453  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetState
	I0120 12:34:45.250058  992635 main.go:141] libmachine: (embed-certs-987349) Calling .DriverName
	I0120 12:34:45.250219  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45249
	I0120 12:34:45.250876  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:45.251417  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:34:45.251442  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:45.251975  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:45.252485  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:34:45.252527  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:45.252582  992635 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0120 12:34:45.252806  992635 main.go:141] libmachine: (embed-certs-987349) Calling .DriverName
	I0120 12:34:45.253386  992635 main.go:141] libmachine: (embed-certs-987349) Calling .DriverName
	I0120 12:34:45.253969  992635 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 12:34:45.253998  992635 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 12:34:45.254019  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHHostname
	I0120 12:34:45.254034  992635 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:34:45.254933  992635 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0120 12:34:45.255880  992635 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:34:45.255900  992635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 12:34:45.255918  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHHostname
	I0120 12:34:45.258271  992635 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0120 12:34:45.258378  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:34:45.258973  992635 main.go:141] libmachine: (embed-certs-987349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:72:25", ip: ""} in network mk-embed-certs-987349: {Iface:virbr4 ExpiryTime:2025-01-20 13:29:28 +0000 UTC Type:0 Mac:52:54:00:17:72:25 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:embed-certs-987349 Clientid:01:52:54:00:17:72:25}
	I0120 12:34:45.259074  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined IP address 192.168.72.170 and MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:34:45.259447  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHPort
	I0120 12:34:45.259546  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0120 12:34:45.259555  992635 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0120 12:34:45.259566  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHHostname
	I0120 12:34:45.259650  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHKeyPath
	I0120 12:34:45.260023  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHUsername
	I0120 12:34:45.260165  992635 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/embed-certs-987349/id_rsa Username:docker}
	I0120 12:34:45.260401  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:34:45.260819  992635 main.go:141] libmachine: (embed-certs-987349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:72:25", ip: ""} in network mk-embed-certs-987349: {Iface:virbr4 ExpiryTime:2025-01-20 13:29:28 +0000 UTC Type:0 Mac:52:54:00:17:72:25 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:embed-certs-987349 Clientid:01:52:54:00:17:72:25}
	I0120 12:34:45.260837  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined IP address 192.168.72.170 and MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:34:45.261018  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHPort
	I0120 12:34:45.261123  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHKeyPath
	I0120 12:34:45.261371  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHUsername
	I0120 12:34:45.261498  992635 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/embed-certs-987349/id_rsa Username:docker}
	I0120 12:34:45.263039  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:34:45.263451  992635 main.go:141] libmachine: (embed-certs-987349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:72:25", ip: ""} in network mk-embed-certs-987349: {Iface:virbr4 ExpiryTime:2025-01-20 13:29:28 +0000 UTC Type:0 Mac:52:54:00:17:72:25 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:embed-certs-987349 Clientid:01:52:54:00:17:72:25}
	I0120 12:34:45.263466  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined IP address 192.168.72.170 and MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:34:45.263718  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHPort
	I0120 12:34:45.263876  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHKeyPath
	I0120 12:34:45.264027  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHUsername
	I0120 12:34:45.264247  992635 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/embed-certs-987349/id_rsa Username:docker}
	I0120 12:34:45.271639  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41109
	I0120 12:34:45.272049  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:45.272492  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:34:45.272506  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:45.272861  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:45.273045  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetState
	I0120 12:34:45.275220  992635 main.go:141] libmachine: (embed-certs-987349) Calling .DriverName
	I0120 12:34:45.275411  992635 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 12:34:45.275425  992635 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 12:34:45.275441  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHHostname
	I0120 12:34:45.278031  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:34:45.278264  992635 main.go:141] libmachine: (embed-certs-987349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:72:25", ip: ""} in network mk-embed-certs-987349: {Iface:virbr4 ExpiryTime:2025-01-20 13:29:28 +0000 UTC Type:0 Mac:52:54:00:17:72:25 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:embed-certs-987349 Clientid:01:52:54:00:17:72:25}
	I0120 12:34:45.278284  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined IP address 192.168.72.170 and MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:34:45.278459  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHPort
	I0120 12:34:45.278651  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHKeyPath
	I0120 12:34:45.278797  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHUsername
	I0120 12:34:45.278940  992635 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/embed-certs-987349/id_rsa Username:docker}
	I0120 12:34:45.485223  992635 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:34:45.512129  992635 node_ready.go:35] waiting up to 6m0s for node "embed-certs-987349" to be "Ready" ...
	I0120 12:34:45.535766  992635 node_ready.go:49] node "embed-certs-987349" has status "Ready":"True"
	I0120 12:34:45.535800  992635 node_ready.go:38] duration metric: took 23.637811ms for node "embed-certs-987349" to be "Ready" ...
	I0120 12:34:45.535816  992635 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:34:45.546936  992635 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-cf5ts" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:45.591884  992635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 12:34:45.672669  992635 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 12:34:45.672696  992635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0120 12:34:45.706505  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0120 12:34:45.706552  992635 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0120 12:34:45.719651  992635 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 12:34:45.719685  992635 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 12:34:45.797607  992635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:34:45.912193  992635 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 12:34:45.912228  992635 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 12:34:45.919037  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0120 12:34:45.919066  992635 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0120 12:34:45.995504  992635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 12:34:45.999745  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0120 12:34:45.999769  992635 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0120 12:34:46.012312  992635 main.go:141] libmachine: Making call to close driver server
	I0120 12:34:46.012340  992635 main.go:141] libmachine: (embed-certs-987349) Calling .Close
	I0120 12:34:46.012774  992635 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:34:46.012805  992635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:34:46.012815  992635 main.go:141] libmachine: Making call to close driver server
	I0120 12:34:46.012824  992635 main.go:141] libmachine: (embed-certs-987349) Calling .Close
	I0120 12:34:46.013169  992635 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:34:46.013179  992635 main.go:141] libmachine: (embed-certs-987349) DBG | Closing plugin on server side
	I0120 12:34:46.013190  992635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:34:46.039766  992635 main.go:141] libmachine: Making call to close driver server
	I0120 12:34:46.039787  992635 main.go:141] libmachine: (embed-certs-987349) Calling .Close
	I0120 12:34:46.040079  992635 main.go:141] libmachine: (embed-certs-987349) DBG | Closing plugin on server side
	I0120 12:34:46.040141  992635 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:34:46.040161  992635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:34:46.060472  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0120 12:34:46.060499  992635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0120 12:34:46.125182  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0120 12:34:46.125209  992635 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0120 12:34:46.163864  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0120 12:34:46.163897  992635 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0120 12:34:46.271512  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0120 12:34:46.271542  992635 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0120 12:34:46.315589  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0120 12:34:46.315615  992635 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0120 12:34:46.382800  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 12:34:46.382834  992635 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0120 12:34:46.471318  992635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 12:34:47.146418  992635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.348766384s)
	I0120 12:34:47.146477  992635 main.go:141] libmachine: Making call to close driver server
	I0120 12:34:47.146493  992635 main.go:141] libmachine: (embed-certs-987349) Calling .Close
	I0120 12:34:47.146889  992635 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:34:47.146910  992635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:34:47.146920  992635 main.go:141] libmachine: Making call to close driver server
	I0120 12:34:47.146928  992635 main.go:141] libmachine: (embed-certs-987349) Calling .Close
	I0120 12:34:47.148865  992635 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:34:47.148875  992635 main.go:141] libmachine: (embed-certs-987349) DBG | Closing plugin on server side
	I0120 12:34:47.148885  992635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:34:47.375249  992635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.379691916s)
	I0120 12:34:47.375330  992635 main.go:141] libmachine: Making call to close driver server
	I0120 12:34:47.375349  992635 main.go:141] libmachine: (embed-certs-987349) Calling .Close
	I0120 12:34:47.375787  992635 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:34:47.375817  992635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:34:47.375827  992635 main.go:141] libmachine: Making call to close driver server
	I0120 12:34:47.375835  992635 main.go:141] libmachine: (embed-certs-987349) Calling .Close
	I0120 12:34:47.375855  992635 main.go:141] libmachine: (embed-certs-987349) DBG | Closing plugin on server side
	I0120 12:34:47.376085  992635 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:34:47.376105  992635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:34:47.376121  992635 addons.go:479] Verifying addon metrics-server=true in "embed-certs-987349"
	I0120 12:34:47.554735  992635 pod_ready.go:103] pod "coredns-668d6bf9bc-cf5ts" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:48.098046  992635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.626653683s)
	I0120 12:34:48.098124  992635 main.go:141] libmachine: Making call to close driver server
	I0120 12:34:48.098144  992635 main.go:141] libmachine: (embed-certs-987349) Calling .Close
	I0120 12:34:48.098568  992635 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:34:48.098628  992635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:34:48.098648  992635 main.go:141] libmachine: Making call to close driver server
	I0120 12:34:48.098651  992635 main.go:141] libmachine: (embed-certs-987349) DBG | Closing plugin on server side
	I0120 12:34:48.098663  992635 main.go:141] libmachine: (embed-certs-987349) Calling .Close
	I0120 12:34:48.098945  992635 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:34:48.098958  992635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:34:48.100362  992635 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-987349 addons enable metrics-server
	
	I0120 12:34:48.101744  992635 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0120 12:34:48.102973  992635 addons.go:514] duration metric: took 2.897750546s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0120 12:34:50.054643  992635 pod_ready.go:103] pod "coredns-668d6bf9bc-cf5ts" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:52.555092  992635 pod_ready.go:93] pod "coredns-668d6bf9bc-cf5ts" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:52.555118  992635 pod_ready.go:82] duration metric: took 7.008153036s for pod "coredns-668d6bf9bc-cf5ts" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.555129  992635 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-gr6pw" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.559701  992635 pod_ready.go:93] pod "coredns-668d6bf9bc-gr6pw" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:52.559730  992635 pod_ready.go:82] duration metric: took 4.593756ms for pod "coredns-668d6bf9bc-gr6pw" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.559743  992635 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.564650  992635 pod_ready.go:93] pod "etcd-embed-certs-987349" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:52.564677  992635 pod_ready.go:82] duration metric: took 4.924851ms for pod "etcd-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.564690  992635 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.568924  992635 pod_ready.go:93] pod "kube-apiserver-embed-certs-987349" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:52.568947  992635 pod_ready.go:82] duration metric: took 4.248574ms for pod "kube-apiserver-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.568959  992635 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.573555  992635 pod_ready.go:93] pod "kube-controller-manager-embed-certs-987349" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:52.573574  992635 pod_ready.go:82] duration metric: took 4.607213ms for pod "kube-controller-manager-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.573582  992635 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xrg5x" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.951750  992635 pod_ready.go:93] pod "kube-proxy-xrg5x" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:52.951777  992635 pod_ready.go:82] duration metric: took 378.189084ms for pod "kube-proxy-xrg5x" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.951787  992635 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:53.352358  992635 pod_ready.go:93] pod "kube-scheduler-embed-certs-987349" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:53.352397  992635 pod_ready.go:82] duration metric: took 400.600706ms for pod "kube-scheduler-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:53.352410  992635 pod_ready.go:39] duration metric: took 7.816579945s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:34:53.352431  992635 api_server.go:52] waiting for apiserver process to appear ...
	I0120 12:34:53.352497  992635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:53.385445  992635 api_server.go:72] duration metric: took 8.18029522s to wait for apiserver process to appear ...
	I0120 12:34:53.385483  992635 api_server.go:88] waiting for apiserver healthz status ...
	I0120 12:34:53.385512  992635 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8443/healthz ...
	I0120 12:34:53.390273  992635 api_server.go:279] https://192.168.72.170:8443/healthz returned 200:
	ok
	I0120 12:34:53.391546  992635 api_server.go:141] control plane version: v1.32.0
	I0120 12:34:53.391569  992635 api_server.go:131] duration metric: took 6.078483ms to wait for apiserver health ...
	I0120 12:34:53.391576  992635 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 12:34:53.555192  992635 system_pods.go:59] 9 kube-system pods found
	I0120 12:34:53.555222  992635 system_pods.go:61] "coredns-668d6bf9bc-cf5ts" [91648c6f-7cef-427f-82f3-7572a9b5d80e] Running
	I0120 12:34:53.555227  992635 system_pods.go:61] "coredns-668d6bf9bc-gr6pw" [6ff16a87-0a5e-4d82-b13d-2c72afef6dc0] Running
	I0120 12:34:53.555231  992635 system_pods.go:61] "etcd-embed-certs-987349" [5a54b1fe-f8d1-43c6-a430-a37fa3fa04b7] Running
	I0120 12:34:53.555235  992635 system_pods.go:61] "kube-apiserver-embed-certs-987349" [3e1da80d-0a1d-44bb-945d-534b91eebb95] Running
	I0120 12:34:53.555241  992635 system_pods.go:61] "kube-controller-manager-embed-certs-987349" [e1f4800a-ff08-4ea5-8134-81130f2d8f3d] Running
	I0120 12:34:53.555245  992635 system_pods.go:61] "kube-proxy-xrg5x" [a76bebb9-1eed-46fb-9f3a-d3dc1a5930c7] Running
	I0120 12:34:53.555248  992635 system_pods.go:61] "kube-scheduler-embed-certs-987349" [d35e4dae-055f-4db7-b807-5767fa324498] Running
	I0120 12:34:53.555257  992635 system_pods.go:61] "metrics-server-f79f97bbb-4vcgc" [2108ac96-d8cd-429f-ac2d-babc6d97267b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 12:34:53.555262  992635 system_pods.go:61] "storage-provisioner" [953b33a8-d2a0-447d-a01b-49350c6555f7] Running
	I0120 12:34:53.555270  992635 system_pods.go:74] duration metric: took 163.687709ms to wait for pod list to return data ...
	I0120 12:34:53.555281  992635 default_sa.go:34] waiting for default service account to be created ...
	I0120 12:34:53.753014  992635 default_sa.go:45] found service account: "default"
	I0120 12:34:53.753053  992635 default_sa.go:55] duration metric: took 197.764358ms for default service account to be created ...
	I0120 12:34:53.753066  992635 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 12:34:53.953127  992635 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p embed-certs-987349 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0": signal: killed
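For reference, the start invocation that was killed can be re-run by hand with the same arguments; the command below is copied verbatim from the failure message above and assumes the locally built out/minikube-linux-amd64 binary used throughout this report:

	# re-run the failed SecondStart step with the arguments recorded above
	out/minikube-linux-amd64 start -p embed-certs-987349 --memory=2200 \
	  --alsologtostderr --wait=true --embed-certs \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.32.0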
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-987349 -n embed-certs-987349
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-987349 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-987349 logs -n 25: (1.769637965s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p embed-certs-987349                                  | embed-certs-987349           | jenkins | v1.35.0 | 20 Jan 25 12:27 UTC | 20 Jan 25 12:29 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-496524                  | no-preload-496524            | jenkins | v1.35.0 | 20 Jan 25 12:28 UTC | 20 Jan 25 12:28 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-496524                                   | no-preload-496524            | jenkins | v1.35.0 | 20 Jan 25 12:28 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-981597  | default-k8s-diff-port-981597 | jenkins | v1.35.0 | 20 Jan 25 12:28 UTC | 20 Jan 25 12:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-981597 | jenkins | v1.35.0 | 20 Jan 25 12:28 UTC | 20 Jan 25 12:30 UTC |
	|         | default-k8s-diff-port-981597                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-987349                 | embed-certs-987349           | jenkins | v1.35.0 | 20 Jan 25 12:29 UTC | 20 Jan 25 12:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-987349                                  | embed-certs-987349           | jenkins | v1.35.0 | 20 Jan 25 12:29 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-134433        | old-k8s-version-134433       | jenkins | v1.35.0 | 20 Jan 25 12:29 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-981597       | default-k8s-diff-port-981597 | jenkins | v1.35.0 | 20 Jan 25 12:30 UTC | 20 Jan 25 12:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-981597 | jenkins | v1.35.0 | 20 Jan 25 12:30 UTC |                     |
	|         | default-k8s-diff-port-981597                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-134433                              | old-k8s-version-134433       | jenkins | v1.35.0 | 20 Jan 25 12:31 UTC | 20 Jan 25 12:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-134433             | old-k8s-version-134433       | jenkins | v1.35.0 | 20 Jan 25 12:31 UTC | 20 Jan 25 12:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-134433                              | old-k8s-version-134433       | jenkins | v1.35.0 | 20 Jan 25 12:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-134433                              | old-k8s-version-134433       | jenkins | v1.35.0 | 20 Jan 25 12:54 UTC | 20 Jan 25 12:55 UTC |
	| start   | -p newest-cni-476001 --memory=2200 --alsologtostderr   | newest-cni-476001            | jenkins | v1.35.0 | 20 Jan 25 12:55 UTC | 20 Jan 25 12:55 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-496524                                   | no-preload-496524            | jenkins | v1.35.0 | 20 Jan 25 12:55 UTC | 20 Jan 25 12:55 UTC |
	| start   | -p auto-816069 --memory=3072                           | auto-816069                  | jenkins | v1.35.0 | 20 Jan 25 12:55 UTC | 20 Jan 25 12:56 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-476001             | newest-cni-476001            | jenkins | v1.35.0 | 20 Jan 25 12:55 UTC | 20 Jan 25 12:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-476001                                   | newest-cni-476001            | jenkins | v1.35.0 | 20 Jan 25 12:55 UTC | 20 Jan 25 12:56 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-476001                  | newest-cni-476001            | jenkins | v1.35.0 | 20 Jan 25 12:56 UTC | 20 Jan 25 12:56 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-476001 --memory=2200 --alsologtostderr   | newest-cni-476001            | jenkins | v1.35.0 | 20 Jan 25 12:56 UTC | 20 Jan 25 12:56 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| ssh     | -p auto-816069 pgrep -a                                | auto-816069                  | jenkins | v1.35.0 | 20 Jan 25 12:56 UTC | 20 Jan 25 12:56 UTC |
	|         | kubelet                                                |                              |         |         |                     |                     |
	| image   | newest-cni-476001 image list                           | newest-cni-476001            | jenkins | v1.35.0 | 20 Jan 25 12:56 UTC | 20 Jan 25 12:56 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-476001                                   | newest-cni-476001            | jenkins | v1.35.0 | 20 Jan 25 12:56 UTC | 20 Jan 25 12:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-476001                                   | newest-cni-476001            | jenkins | v1.35.0 | 20 Jan 25 12:56 UTC |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 12:56:01
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 12:56:01.739989 1000065 out.go:345] Setting OutFile to fd 1 ...
	I0120 12:56:01.740101 1000065 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:56:01.740114 1000065 out.go:358] Setting ErrFile to fd 2...
	I0120 12:56:01.740121 1000065 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:56:01.740296 1000065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-942401/.minikube/bin
	I0120 12:56:01.740838 1000065 out.go:352] Setting JSON to false
	I0120 12:56:01.741886 1000065 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":20305,"bootTime":1737357457,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 12:56:01.741996 1000065 start.go:139] virtualization: kvm guest
	I0120 12:56:01.769274 1000065 out.go:177] * [newest-cni-476001] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 12:56:01.851932 1000065 notify.go:220] Checking for updates...
	I0120 12:56:01.934915 1000065 out.go:177]   - MINIKUBE_LOCATION=20151
	I0120 12:56:02.019943 1000065 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 12:56:02.084669 1000065 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:56:02.086223 1000065 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-942401/.minikube
	I0120 12:56:02.087516 1000065 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 12:56:02.088641 1000065 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 12:56:02.090409 1000065 config.go:182] Loaded profile config "newest-cni-476001": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:56:02.091143 1000065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:56:02.091211 1000065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:56:02.108208 1000065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35257
	I0120 12:56:02.108844 1000065 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:56:02.109507 1000065 main.go:141] libmachine: Using API Version  1
	I0120 12:56:02.109537 1000065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:56:02.109947 1000065 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:56:02.110145 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .DriverName
	I0120 12:56:02.110406 1000065 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 12:56:02.110780 1000065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:56:02.110836 1000065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:56:02.127405 1000065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34275
	I0120 12:56:02.127905 1000065 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:56:02.128441 1000065 main.go:141] libmachine: Using API Version  1
	I0120 12:56:02.128468 1000065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:56:02.128913 1000065 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:56:02.129090 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .DriverName
	I0120 12:56:02.165584 1000065 out.go:177] * Using the kvm2 driver based on existing profile
	I0120 12:56:02.166882 1000065 start.go:297] selected driver: kvm2
	I0120 12:56:02.166904 1000065 start.go:901] validating driver "kvm2" against &{Name:newest-cni-476001 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:newest-cni-476001 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.124 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:56:02.167065 1000065 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 12:56:02.167867 1000065 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:56:02.167934 1000065 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20151-942401/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 12:56:02.182629 1000065 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 12:56:02.183029 1000065 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0120 12:56:02.183071 1000065 cni.go:84] Creating CNI manager for ""
	I0120 12:56:02.183134 1000065 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:56:02.183196 1000065 start.go:340] cluster config:
	{Name:newest-cni-476001 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:newest-cni-476001 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.124 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:56:02.183322 1000065 iso.go:125] acquiring lock: {Name:mk7f0ac9e7ba04626414e742c3e5292d79996a4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:56:02.185030 1000065 out.go:177] * Starting "newest-cni-476001" primary control-plane node in "newest-cni-476001" cluster
	I0120 12:56:02.186211 1000065 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 12:56:02.186248 1000065 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0120 12:56:02.186257 1000065 cache.go:56] Caching tarball of preloaded images
	I0120 12:56:02.186348 1000065 preload.go:172] Found /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 12:56:02.186362 1000065 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0120 12:56:02.186454 1000065 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/newest-cni-476001/config.json ...
	I0120 12:56:02.186677 1000065 start.go:360] acquireMachinesLock for newest-cni-476001: {Name:mkd5527ce9753efd08511b23d71dbb6bbf416f1b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 12:56:02.186723 1000065 start.go:364] duration metric: took 25.958µs to acquireMachinesLock for "newest-cni-476001"
	I0120 12:56:02.186740 1000065 start.go:96] Skipping create...Using existing machine configuration
	I0120 12:56:02.186751 1000065 fix.go:54] fixHost starting: 
	I0120 12:56:02.189188 1000065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:56:02.189241 1000065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:56:02.203974 1000065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39893
	I0120 12:56:02.204360 1000065 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:56:02.204786 1000065 main.go:141] libmachine: Using API Version  1
	I0120 12:56:02.204806 1000065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:56:02.205154 1000065 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:56:02.205337 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .DriverName
	I0120 12:56:02.205511 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetState
	I0120 12:56:02.207183 1000065 fix.go:112] recreateIfNeeded on newest-cni-476001: state=Stopped err=<nil>
	I0120 12:56:02.207210 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .DriverName
	W0120 12:56:02.207406 1000065 fix.go:138] unexpected machine state, will restart: <nil>
	I0120 12:56:02.209175 1000065 out.go:177] * Restarting existing kvm2 VM for "newest-cni-476001" ...
	I0120 12:56:00.165373  999554 out.go:235]   - Generating certificates and keys ...
	I0120 12:56:00.165477  999554 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 12:56:00.165552  999554 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 12:56:00.171014  999554 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0120 12:56:00.401367  999554 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0120 12:56:00.474215  999554 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0120 12:56:00.590051  999554 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0120 12:56:00.699544  999554 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0120 12:56:00.699683  999554 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [auto-816069 localhost] and IPs [192.168.61.139 127.0.0.1 ::1]
	I0120 12:56:00.910846  999554 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0120 12:56:00.911014  999554 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [auto-816069 localhost] and IPs [192.168.61.139 127.0.0.1 ::1]
	I0120 12:56:00.995311  999554 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0120 12:56:01.457228  999554 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0120 12:56:01.609268  999554 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0120 12:56:01.609332  999554 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 12:56:01.693756  999554 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 12:56:01.807305  999554 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 12:56:01.967976  999554 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 12:56:02.099762  999554 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 12:56:02.283828  999554 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 12:56:02.284479  999554 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 12:56:02.287640  999554 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 12:56:02.289074  999554 out.go:235]   - Booting up control plane ...
	I0120 12:56:02.289226  999554 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 12:56:02.289353  999554 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 12:56:02.289475  999554 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 12:56:02.309158  999554 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 12:56:02.318609  999554 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 12:56:02.318684  999554 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 12:56:02.449228  999554 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 12:56:02.449345  999554 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 12:56:02.210357 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .Start
	I0120 12:56:02.210759 1000065 main.go:141] libmachine: (newest-cni-476001) starting domain...
	I0120 12:56:02.210779 1000065 main.go:141] libmachine: (newest-cni-476001) ensuring networks are active...
	I0120 12:56:02.211656 1000065 main.go:141] libmachine: (newest-cni-476001) Ensuring network default is active
	I0120 12:56:02.212118 1000065 main.go:141] libmachine: (newest-cni-476001) Ensuring network mk-newest-cni-476001 is active
	I0120 12:56:02.212572 1000065 main.go:141] libmachine: (newest-cni-476001) getting domain XML...
	I0120 12:56:02.213683 1000065 main.go:141] libmachine: (newest-cni-476001) creating domain...
	I0120 12:56:03.482883 1000065 main.go:141] libmachine: (newest-cni-476001) waiting for IP...
	I0120 12:56:03.483806 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:03.484256 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | unable to find current IP address of domain newest-cni-476001 in network mk-newest-cni-476001
	I0120 12:56:03.484364 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | I0120 12:56:03.484238 1000101 retry.go:31] will retry after 258.595064ms: waiting for domain to come up
	I0120 12:56:03.744913 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:03.745560 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | unable to find current IP address of domain newest-cni-476001 in network mk-newest-cni-476001
	I0120 12:56:03.745600 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | I0120 12:56:03.745495 1000101 retry.go:31] will retry after 239.905973ms: waiting for domain to come up
	I0120 12:56:03.987230 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:03.987832 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | unable to find current IP address of domain newest-cni-476001 in network mk-newest-cni-476001
	I0120 12:56:03.987906 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | I0120 12:56:03.987812 1000101 retry.go:31] will retry after 456.489794ms: waiting for domain to come up
	I0120 12:56:04.446157 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:04.446824 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | unable to find current IP address of domain newest-cni-476001 in network mk-newest-cni-476001
	I0120 12:56:04.446858 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | I0120 12:56:04.446797 1000101 retry.go:31] will retry after 533.613885ms: waiting for domain to come up
	I0120 12:56:04.982647 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:04.983299 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | unable to find current IP address of domain newest-cni-476001 in network mk-newest-cni-476001
	I0120 12:56:04.983334 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | I0120 12:56:04.983259 1000101 retry.go:31] will retry after 493.234684ms: waiting for domain to come up
	I0120 12:56:05.477962 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:05.478542 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | unable to find current IP address of domain newest-cni-476001 in network mk-newest-cni-476001
	I0120 12:56:05.478578 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | I0120 12:56:05.478478 1000101 retry.go:31] will retry after 784.808447ms: waiting for domain to come up
	I0120 12:56:06.265230 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:06.265716 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | unable to find current IP address of domain newest-cni-476001 in network mk-newest-cni-476001
	I0120 12:56:06.265757 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | I0120 12:56:06.265663 1000101 retry.go:31] will retry after 873.647521ms: waiting for domain to come up
	I0120 12:56:03.449820  999554 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00129953s
	I0120 12:56:03.449897  999554 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 12:56:08.451127  999554 kubeadm.go:310] [api-check] The API server is healthy after 5.002274337s
	I0120 12:56:08.472789  999554 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 12:56:08.486659  999554 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 12:56:08.521295  999554 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 12:56:08.521570  999554 kubeadm.go:310] [mark-control-plane] Marking the node auto-816069 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 12:56:08.537780  999554 kubeadm.go:310] [bootstrap-token] Using token: xvxl11.vdgatec78cqvv13q
	I0120 12:56:08.539032  999554 out.go:235]   - Configuring RBAC rules ...
	I0120 12:56:08.539196  999554 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 12:56:08.549625  999554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 12:56:08.559968  999554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 12:56:08.563792  999554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 12:56:08.571524  999554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 12:56:08.578966  999554 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 12:56:08.859882  999554 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 12:56:09.286572  999554 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 12:56:09.858236  999554 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 12:56:09.858262  999554 kubeadm.go:310] 
	I0120 12:56:09.858344  999554 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 12:56:09.858358  999554 kubeadm.go:310] 
	I0120 12:56:09.858508  999554 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 12:56:09.858550  999554 kubeadm.go:310] 
	I0120 12:56:09.858594  999554 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 12:56:09.858651  999554 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 12:56:09.858699  999554 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 12:56:09.858706  999554 kubeadm.go:310] 
	I0120 12:56:09.858770  999554 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 12:56:09.858779  999554 kubeadm.go:310] 
	I0120 12:56:09.858850  999554 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 12:56:09.858859  999554 kubeadm.go:310] 
	I0120 12:56:09.858934  999554 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 12:56:09.859051  999554 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 12:56:09.859168  999554 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 12:56:09.859182  999554 kubeadm.go:310] 
	I0120 12:56:09.859307  999554 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 12:56:09.859429  999554 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 12:56:09.859439  999554 kubeadm.go:310] 
	I0120 12:56:09.859562  999554 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xvxl11.vdgatec78cqvv13q \
	I0120 12:56:09.859713  999554 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9daca4b474befed26d429fab1ed19f40e58ea9925316ea59a2e801171f5c9665 \
	I0120 12:56:09.859753  999554 kubeadm.go:310] 	--control-plane 
	I0120 12:56:09.859763  999554 kubeadm.go:310] 
	I0120 12:56:09.859887  999554 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 12:56:09.859896  999554 kubeadm.go:310] 
	I0120 12:56:09.859998  999554 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xvxl11.vdgatec78cqvv13q \
	I0120 12:56:09.860155  999554 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9daca4b474befed26d429fab1ed19f40e58ea9925316ea59a2e801171f5c9665 
	I0120 12:56:09.860399  999554 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 12:56:09.860596  999554 cni.go:84] Creating CNI manager for ""
	I0120 12:56:09.860611  999554 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:56:09.862306  999554 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 12:56:07.141103 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:07.141640 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | unable to find current IP address of domain newest-cni-476001 in network mk-newest-cni-476001
	I0120 12:56:07.141666 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | I0120 12:56:07.141594 1000101 retry.go:31] will retry after 1.208198715s: waiting for domain to come up
	I0120 12:56:08.350942 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:08.351543 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | unable to find current IP address of domain newest-cni-476001 in network mk-newest-cni-476001
	I0120 12:56:08.351583 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | I0120 12:56:08.351536 1000101 retry.go:31] will retry after 1.487198386s: waiting for domain to come up
	I0120 12:56:09.840199 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:09.840830 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | unable to find current IP address of domain newest-cni-476001 in network mk-newest-cni-476001
	I0120 12:56:09.840863 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | I0120 12:56:09.840789 1000101 retry.go:31] will retry after 2.181675852s: waiting for domain to come up
	I0120 12:56:09.863456  999554 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 12:56:09.874431  999554 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 12:56:09.893638  999554 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 12:56:09.893722  999554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:56:09.893760  999554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-816069 minikube.k8s.io/updated_at=2025_01_20T12_56_09_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=77d80cf1517f5f1439721b28711982314b21bec9 minikube.k8s.io/name=auto-816069 minikube.k8s.io/primary=true
	I0120 12:56:09.908177  999554 ops.go:34] apiserver oom_adj: -16
	I0120 12:56:10.046591  999554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:56:10.546711  999554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:56:11.046655  999554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:56:11.547445  999554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:56:12.046716  999554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:56:12.547537  999554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:56:13.046688  999554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:56:13.547701  999554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:56:14.046715  999554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:56:14.254736  999554 kubeadm.go:1113] duration metric: took 4.361083637s to wait for elevateKubeSystemPrivileges
	I0120 12:56:14.254777  999554 kubeadm.go:394] duration metric: took 14.57331142s to StartCluster
	I0120 12:56:14.254802  999554 settings.go:142] acquiring lock: {Name:mk1751cb86b61fcfcb0ff093c93fb653ca2002ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:56:14.254886  999554 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:56:14.257257  999554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/kubeconfig: {Name:mk7b2d17f701fc845d991764ab9cc32b8b0646e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:56:14.257560  999554 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.139 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 12:56:14.257636  999554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0120 12:56:14.257961  999554 config.go:182] Loaded profile config "auto-816069": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:56:14.257981  999554 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 12:56:14.258106  999554 addons.go:69] Setting storage-provisioner=true in profile "auto-816069"
	I0120 12:56:14.258135  999554 addons.go:238] Setting addon storage-provisioner=true in "auto-816069"
	I0120 12:56:14.258171  999554 host.go:66] Checking if "auto-816069" exists ...
	I0120 12:56:14.258308  999554 addons.go:69] Setting default-storageclass=true in profile "auto-816069"
	I0120 12:56:14.258331  999554 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-816069"
	I0120 12:56:14.258864  999554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:56:14.258898  999554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:56:14.258970  999554 out.go:177] * Verifying Kubernetes components...
	I0120 12:56:14.259501  999554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:56:14.259534  999554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:56:14.260401  999554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:56:14.279760  999554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43923
	I0120 12:56:14.279761  999554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38757
	I0120 12:56:14.280296  999554 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:56:14.280610  999554 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:56:14.280951  999554 main.go:141] libmachine: Using API Version  1
	I0120 12:56:14.280979  999554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:56:14.281106  999554 main.go:141] libmachine: Using API Version  1
	I0120 12:56:14.281122  999554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:56:14.281441  999554 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:56:14.281717  999554 main.go:141] libmachine: (auto-816069) Calling .GetState
	I0120 12:56:14.281769  999554 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:56:14.282326  999554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:56:14.282372  999554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:56:14.286171  999554 addons.go:238] Setting addon default-storageclass=true in "auto-816069"
	I0120 12:56:14.286214  999554 host.go:66] Checking if "auto-816069" exists ...
	I0120 12:56:14.286598  999554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:56:14.286617  999554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:56:14.306243  999554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32813
	I0120 12:56:14.306757  999554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43307
	I0120 12:56:14.306927  999554 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:56:14.307184  999554 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:56:14.307550  999554 main.go:141] libmachine: Using API Version  1
	I0120 12:56:14.307568  999554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:56:14.307688  999554 main.go:141] libmachine: Using API Version  1
	I0120 12:56:14.307697  999554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:56:14.307985  999554 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:56:14.308034  999554 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:56:14.308549  999554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:56:14.308570  999554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:56:14.308762  999554 main.go:141] libmachine: (auto-816069) Calling .GetState
	I0120 12:56:14.315482  999554 main.go:141] libmachine: (auto-816069) Calling .DriverName
	I0120 12:56:14.318592  999554 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:56:14.321005  999554 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:56:14.321029  999554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 12:56:14.321051  999554 main.go:141] libmachine: (auto-816069) Calling .GetSSHHostname
	I0120 12:56:14.325407  999554 main.go:141] libmachine: (auto-816069) DBG | domain auto-816069 has defined MAC address 52:54:00:ed:2a:18 in network mk-auto-816069
	I0120 12:56:14.325904  999554 main.go:141] libmachine: (auto-816069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:2a:18", ip: ""} in network mk-auto-816069: {Iface:virbr3 ExpiryTime:2025-01-20 13:55:42 +0000 UTC Type:0 Mac:52:54:00:ed:2a:18 Iaid: IPaddr:192.168.61.139 Prefix:24 Hostname:auto-816069 Clientid:01:52:54:00:ed:2a:18}
	I0120 12:56:14.325933  999554 main.go:141] libmachine: (auto-816069) DBG | domain auto-816069 has defined IP address 192.168.61.139 and MAC address 52:54:00:ed:2a:18 in network mk-auto-816069
	I0120 12:56:14.328581  999554 main.go:141] libmachine: (auto-816069) Calling .GetSSHPort
	I0120 12:56:14.328791  999554 main.go:141] libmachine: (auto-816069) Calling .GetSSHKeyPath
	I0120 12:56:14.331323  999554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46171
	I0120 12:56:14.331328  999554 main.go:141] libmachine: (auto-816069) Calling .GetSSHUsername
	I0120 12:56:14.331545  999554 sshutil.go:53] new ssh client: &{IP:192.168.61.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/auto-816069/id_rsa Username:docker}
	I0120 12:56:14.332266  999554 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:56:14.332899  999554 main.go:141] libmachine: Using API Version  1
	I0120 12:56:14.332915  999554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:56:14.333367  999554 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:56:14.333601  999554 main.go:141] libmachine: (auto-816069) Calling .GetState
	I0120 12:56:14.335951  999554 main.go:141] libmachine: (auto-816069) Calling .DriverName
	I0120 12:56:14.338773  999554 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 12:56:14.338789  999554 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 12:56:14.338807  999554 main.go:141] libmachine: (auto-816069) Calling .GetSSHHostname
	I0120 12:56:14.342546  999554 main.go:141] libmachine: (auto-816069) DBG | domain auto-816069 has defined MAC address 52:54:00:ed:2a:18 in network mk-auto-816069
	I0120 12:56:14.343017  999554 main.go:141] libmachine: (auto-816069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:2a:18", ip: ""} in network mk-auto-816069: {Iface:virbr3 ExpiryTime:2025-01-20 13:55:42 +0000 UTC Type:0 Mac:52:54:00:ed:2a:18 Iaid: IPaddr:192.168.61.139 Prefix:24 Hostname:auto-816069 Clientid:01:52:54:00:ed:2a:18}
	I0120 12:56:14.343034  999554 main.go:141] libmachine: (auto-816069) DBG | domain auto-816069 has defined IP address 192.168.61.139 and MAC address 52:54:00:ed:2a:18 in network mk-auto-816069
	I0120 12:56:14.343333  999554 main.go:141] libmachine: (auto-816069) Calling .GetSSHPort
	I0120 12:56:14.343520  999554 main.go:141] libmachine: (auto-816069) Calling .GetSSHKeyPath
	I0120 12:56:14.343665  999554 main.go:141] libmachine: (auto-816069) Calling .GetSSHUsername
	I0120 12:56:14.343882  999554 sshutil.go:53] new ssh client: &{IP:192.168.61.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/auto-816069/id_rsa Username:docker}
	I0120 12:56:14.767986  999554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:56:14.784507  999554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 12:56:14.800542  999554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:56:14.800823  999554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0120 12:56:15.704902  999554 main.go:141] libmachine: Making call to close driver server
	I0120 12:56:15.704934  999554 main.go:141] libmachine: (auto-816069) Calling .Close
	I0120 12:56:15.704984  999554 main.go:141] libmachine: Making call to close driver server
	I0120 12:56:15.705003  999554 main.go:141] libmachine: (auto-816069) Calling .Close
	I0120 12:56:15.705030  999554 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0120 12:56:15.705272  999554 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:56:15.705290  999554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:56:15.705300  999554 main.go:141] libmachine: Making call to close driver server
	I0120 12:56:15.705309  999554 main.go:141] libmachine: (auto-816069) Calling .Close
	I0120 12:56:15.705381  999554 main.go:141] libmachine: (auto-816069) DBG | Closing plugin on server side
	I0120 12:56:15.705414  999554 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:56:15.705421  999554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:56:15.705428  999554 main.go:141] libmachine: Making call to close driver server
	I0120 12:56:15.705434  999554 main.go:141] libmachine: (auto-816069) Calling .Close
	I0120 12:56:15.705662  999554 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:56:15.705679  999554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:56:15.706099  999554 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:56:15.706126  999554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:56:15.706583  999554 node_ready.go:35] waiting up to 15m0s for node "auto-816069" to be "Ready" ...
	I0120 12:56:15.721491  999554 node_ready.go:49] node "auto-816069" has status "Ready":"True"
	I0120 12:56:15.721519  999554 node_ready.go:38] duration metric: took 14.859122ms for node "auto-816069" to be "Ready" ...
	I0120 12:56:15.721534  999554 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:56:15.730342  999554 main.go:141] libmachine: Making call to close driver server
	I0120 12:56:15.730366  999554 main.go:141] libmachine: (auto-816069) Calling .Close
	I0120 12:56:15.730696  999554 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:56:15.730714  999554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:56:15.730765  999554 main.go:141] libmachine: (auto-816069) DBG | Closing plugin on server side
	I0120 12:56:15.732190  999554 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0120 12:56:12.023858 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:12.024456 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | unable to find current IP address of domain newest-cni-476001 in network mk-newest-cni-476001
	I0120 12:56:12.024495 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | I0120 12:56:12.024407 1000101 retry.go:31] will retry after 2.551293437s: waiting for domain to come up
	I0120 12:56:14.577186 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:14.577788 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | unable to find current IP address of domain newest-cni-476001 in network mk-newest-cni-476001
	I0120 12:56:14.577822 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | I0120 12:56:14.577766 1000101 retry.go:31] will retry after 2.624785482s: waiting for domain to come up
	I0120 12:56:15.733242  999554 addons.go:514] duration metric: took 1.475257562s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0120 12:56:15.735758  999554 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-nbczt" in "kube-system" namespace to be "Ready" ...
	I0120 12:56:16.209764  999554 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-816069" context rescaled to 1 replicas
	I0120 12:56:16.742462  999554 pod_ready.go:93] pod "coredns-668d6bf9bc-nbczt" in "kube-system" namespace has status "Ready":"True"
	I0120 12:56:16.742491  999554 pod_ready.go:82] duration metric: took 1.006712564s for pod "coredns-668d6bf9bc-nbczt" in "kube-system" namespace to be "Ready" ...
	I0120 12:56:16.742502  999554 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-qg6gx" in "kube-system" namespace to be "Ready" ...
	I0120 12:56:17.205405 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:17.205890 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | unable to find current IP address of domain newest-cni-476001 in network mk-newest-cni-476001
	I0120 12:56:17.205923 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | I0120 12:56:17.205850 1000101 retry.go:31] will retry after 3.259615851s: waiting for domain to come up
	I0120 12:56:20.467567 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:20.468056 1000065 main.go:141] libmachine: (newest-cni-476001) found domain IP: 192.168.50.124
	I0120 12:56:20.468073 1000065 main.go:141] libmachine: (newest-cni-476001) reserving static IP address...
	I0120 12:56:20.468086 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has current primary IP address 192.168.50.124 and MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:20.468471 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | found host DHCP lease matching {name: "newest-cni-476001", mac: "52:54:00:3f:22:0b", ip: "192.168.50.124"} in network mk-newest-cni-476001: {Iface:virbr2 ExpiryTime:2025-01-20 13:55:15 +0000 UTC Type:0 Mac:52:54:00:3f:22:0b Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:newest-cni-476001 Clientid:01:52:54:00:3f:22:0b}
	I0120 12:56:20.468494 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | skip adding static IP to network mk-newest-cni-476001 - found existing host DHCP lease matching {name: "newest-cni-476001", mac: "52:54:00:3f:22:0b", ip: "192.168.50.124"}
	I0120 12:56:20.468503 1000065 main.go:141] libmachine: (newest-cni-476001) reserved static IP address 192.168.50.124 for domain newest-cni-476001
	I0120 12:56:20.468516 1000065 main.go:141] libmachine: (newest-cni-476001) waiting for SSH...
	I0120 12:56:20.468529 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | Getting to WaitForSSH function...
	I0120 12:56:20.470319 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:20.470576 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:22:0b", ip: ""} in network mk-newest-cni-476001: {Iface:virbr2 ExpiryTime:2025-01-20 13:55:15 +0000 UTC Type:0 Mac:52:54:00:3f:22:0b Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:newest-cni-476001 Clientid:01:52:54:00:3f:22:0b}
	I0120 12:56:20.470606 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined IP address 192.168.50.124 and MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:20.470737 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | Using SSH client type: external
	I0120 12:56:20.470791 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | Using SSH private key: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/newest-cni-476001/id_rsa (-rw-------)
	I0120 12:56:20.470833 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.124 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20151-942401/.minikube/machines/newest-cni-476001/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 12:56:20.470853 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | About to run SSH command:
	I0120 12:56:20.470864 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | exit 0
	I0120 12:56:20.598001 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | SSH cmd err, output: <nil>: 
	I0120 12:56:20.598407 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetConfigRaw
	I0120 12:56:20.599086 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetIP
	I0120 12:56:20.601939 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:20.602298 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:22:0b", ip: ""} in network mk-newest-cni-476001: {Iface:virbr2 ExpiryTime:2025-01-20 13:55:15 +0000 UTC Type:0 Mac:52:54:00:3f:22:0b Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:newest-cni-476001 Clientid:01:52:54:00:3f:22:0b}
	I0120 12:56:20.602328 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined IP address 192.168.50.124 and MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:20.602563 1000065 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/newest-cni-476001/config.json ...
	I0120 12:56:20.602746 1000065 machine.go:93] provisionDockerMachine start ...
	I0120 12:56:20.602764 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .DriverName
	I0120 12:56:20.602979 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHHostname
	I0120 12:56:20.605085 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:20.605393 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:22:0b", ip: ""} in network mk-newest-cni-476001: {Iface:virbr2 ExpiryTime:2025-01-20 13:55:15 +0000 UTC Type:0 Mac:52:54:00:3f:22:0b Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:newest-cni-476001 Clientid:01:52:54:00:3f:22:0b}
	I0120 12:56:20.605424 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined IP address 192.168.50.124 and MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:20.605551 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHPort
	I0120 12:56:20.605720 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHKeyPath
	I0120 12:56:20.605906 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHKeyPath
	I0120 12:56:20.606060 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHUsername
	I0120 12:56:20.606231 1000065 main.go:141] libmachine: Using SSH client type: native
	I0120 12:56:20.606423 1000065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.124 22 <nil> <nil>}
	I0120 12:56:20.606436 1000065 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 12:56:20.726478 1000065 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0120 12:56:20.726512 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetMachineName
	I0120 12:56:20.726765 1000065 buildroot.go:166] provisioning hostname "newest-cni-476001"
	I0120 12:56:20.726790 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetMachineName
	I0120 12:56:20.726968 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHHostname
	I0120 12:56:20.729513 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:20.729824 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:22:0b", ip: ""} in network mk-newest-cni-476001: {Iface:virbr2 ExpiryTime:2025-01-20 13:55:15 +0000 UTC Type:0 Mac:52:54:00:3f:22:0b Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:newest-cni-476001 Clientid:01:52:54:00:3f:22:0b}
	I0120 12:56:20.729851 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined IP address 192.168.50.124 and MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:20.729985 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHPort
	I0120 12:56:20.730259 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHKeyPath
	I0120 12:56:20.730412 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHKeyPath
	I0120 12:56:20.730594 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHUsername
	I0120 12:56:20.730786 1000065 main.go:141] libmachine: Using SSH client type: native
	I0120 12:56:20.730996 1000065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.124 22 <nil> <nil>}
	I0120 12:56:20.731010 1000065 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-476001 && echo "newest-cni-476001" | sudo tee /etc/hostname
	I0120 12:56:20.856994 1000065 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-476001
	
	I0120 12:56:20.857026 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHHostname
	I0120 12:56:20.859941 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:20.860327 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:22:0b", ip: ""} in network mk-newest-cni-476001: {Iface:virbr2 ExpiryTime:2025-01-20 13:55:15 +0000 UTC Type:0 Mac:52:54:00:3f:22:0b Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:newest-cni-476001 Clientid:01:52:54:00:3f:22:0b}
	I0120 12:56:20.860363 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined IP address 192.168.50.124 and MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:20.860549 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHPort
	I0120 12:56:20.860741 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHKeyPath
	I0120 12:56:20.860882 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHKeyPath
	I0120 12:56:20.860987 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHUsername
	I0120 12:56:20.861151 1000065 main.go:141] libmachine: Using SSH client type: native
	I0120 12:56:20.861330 1000065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.124 22 <nil> <nil>}
	I0120 12:56:20.861346 1000065 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-476001' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-476001/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-476001' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 12:56:20.982604 1000065 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 12:56:20.982642 1000065 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20151-942401/.minikube CaCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20151-942401/.minikube}
	I0120 12:56:20.982659 1000065 buildroot.go:174] setting up certificates
	I0120 12:56:20.982667 1000065 provision.go:84] configureAuth start
	I0120 12:56:20.982678 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetMachineName
	I0120 12:56:20.982950 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetIP
	I0120 12:56:20.986313 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:20.986685 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:22:0b", ip: ""} in network mk-newest-cni-476001: {Iface:virbr2 ExpiryTime:2025-01-20 13:55:15 +0000 UTC Type:0 Mac:52:54:00:3f:22:0b Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:newest-cni-476001 Clientid:01:52:54:00:3f:22:0b}
	I0120 12:56:20.986715 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined IP address 192.168.50.124 and MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:20.986846 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHHostname
	I0120 12:56:20.988863 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:20.989198 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:22:0b", ip: ""} in network mk-newest-cni-476001: {Iface:virbr2 ExpiryTime:2025-01-20 13:55:15 +0000 UTC Type:0 Mac:52:54:00:3f:22:0b Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:newest-cni-476001 Clientid:01:52:54:00:3f:22:0b}
	I0120 12:56:20.989233 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined IP address 192.168.50.124 and MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:20.989339 1000065 provision.go:143] copyHostCerts
	I0120 12:56:20.989404 1000065 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem, removing ...
	I0120 12:56:20.989427 1000065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem
	I0120 12:56:20.989485 1000065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem (1078 bytes)
	I0120 12:56:20.989591 1000065 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem, removing ...
	I0120 12:56:20.989604 1000065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem
	I0120 12:56:20.989633 1000065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem (1123 bytes)
	I0120 12:56:20.989707 1000065 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem, removing ...
	I0120 12:56:20.989717 1000065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem
	I0120 12:56:20.989756 1000065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem (1675 bytes)
	I0120 12:56:20.989836 1000065 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem org=jenkins.newest-cni-476001 san=[127.0.0.1 192.168.50.124 localhost minikube newest-cni-476001]
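	
	The provision.go step above issues a server certificate signed by the minikube CA, covering the SAN list printed in that log line. A minimal Go sketch of the technique, standard library only; the key size, validity period, serial number, and output path are illustrative assumptions, not minikube's actual values:
	
	// servercert_sketch.go: issue a server cert from an existing CA with explicit SANs.
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/tls"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		// CA cert/key paths mirror the .minikube/certs layout shown in the log.
		ca, err := tls.LoadX509KeyPair(
			"/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem",
			"/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem")
		if err != nil {
			panic(err)
		}
		caCert, err := x509.ParseCertificate(ca.Certificate[0])
		if err != nil {
			panic(err)
		}
		key, err := rsa.GenerateKey(rand.Reader, 2048) // key size is an assumption
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-476001"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // validity is an assumption
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs copied from the provision.go log line above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.124")},
			DNSNames:    []string{"localhost", "minikube", "newest-cni-476001"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, ca.PrivateKey)
		if err != nil {
			panic(err)
		}
		out, err := os.Create("server.pem") // output path is an assumption
		if err != nil {
			panic(err)
		}
		defer out.Close()
		pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
	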
	I0120 12:56:21.062968 1000065 provision.go:177] copyRemoteCerts
	I0120 12:56:21.063020 1000065 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 12:56:21.063043 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHHostname
	I0120 12:56:21.065507 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:21.065797 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:22:0b", ip: ""} in network mk-newest-cni-476001: {Iface:virbr2 ExpiryTime:2025-01-20 13:55:15 +0000 UTC Type:0 Mac:52:54:00:3f:22:0b Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:newest-cni-476001 Clientid:01:52:54:00:3f:22:0b}
	I0120 12:56:21.065827 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined IP address 192.168.50.124 and MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:21.066044 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHPort
	I0120 12:56:21.066252 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHKeyPath
	I0120 12:56:21.066417 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHUsername
	I0120 12:56:21.066583 1000065 sshutil.go:53] new ssh client: &{IP:192.168.50.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/newest-cni-476001/id_rsa Username:docker}
	I0120 12:56:21.153293 1000065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0120 12:56:21.181277 1000065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0120 12:56:21.209123 1000065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0120 12:56:21.235933 1000065 provision.go:87] duration metric: took 253.25119ms to configureAuth
	I0120 12:56:21.235964 1000065 buildroot.go:189] setting minikube options for container-runtime
	I0120 12:56:21.236256 1000065 config.go:182] Loaded profile config "newest-cni-476001": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:56:21.236366 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHHostname
	I0120 12:56:21.240095 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:21.240520 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:22:0b", ip: ""} in network mk-newest-cni-476001: {Iface:virbr2 ExpiryTime:2025-01-20 13:55:15 +0000 UTC Type:0 Mac:52:54:00:3f:22:0b Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:newest-cni-476001 Clientid:01:52:54:00:3f:22:0b}
	I0120 12:56:21.240567 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined IP address 192.168.50.124 and MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:21.240766 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHPort
	I0120 12:56:21.240932 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHKeyPath
	I0120 12:56:21.241098 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHKeyPath
	I0120 12:56:21.241313 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHUsername
	I0120 12:56:21.241537 1000065 main.go:141] libmachine: Using SSH client type: native
	I0120 12:56:21.241764 1000065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.124 22 <nil> <nil>}
	I0120 12:56:21.241791 1000065 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 12:56:21.494894 1000065 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 12:56:21.494989 1000065 machine.go:96] duration metric: took 892.227319ms to provisionDockerMachine
	I0120 12:56:21.495011 1000065 start.go:293] postStartSetup for "newest-cni-476001" (driver="kvm2")
	I0120 12:56:21.495026 1000065 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 12:56:21.495055 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .DriverName
	I0120 12:56:21.495465 1000065 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 12:56:21.495507 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHHostname
	I0120 12:56:21.498702 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:21.499041 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:22:0b", ip: ""} in network mk-newest-cni-476001: {Iface:virbr2 ExpiryTime:2025-01-20 13:55:15 +0000 UTC Type:0 Mac:52:54:00:3f:22:0b Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:newest-cni-476001 Clientid:01:52:54:00:3f:22:0b}
	I0120 12:56:21.499080 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined IP address 192.168.50.124 and MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:21.499272 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHPort
	I0120 12:56:21.499485 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHKeyPath
	I0120 12:56:21.499691 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHUsername
	I0120 12:56:21.499845 1000065 sshutil.go:53] new ssh client: &{IP:192.168.50.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/newest-cni-476001/id_rsa Username:docker}
	I0120 12:56:21.585715 1000065 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 12:56:21.589906 1000065 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 12:56:21.589936 1000065 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-942401/.minikube/addons for local assets ...
	I0120 12:56:21.590018 1000065 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-942401/.minikube/files for local assets ...
	I0120 12:56:21.590161 1000065 filesync.go:149] local asset: /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem -> 9496562.pem in /etc/ssl/certs
	I0120 12:56:21.590311 1000065 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 12:56:21.600825 1000065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem --> /etc/ssl/certs/9496562.pem (1708 bytes)
	I0120 12:56:21.624435 1000065 start.go:296] duration metric: took 129.409308ms for postStartSetup
	I0120 12:56:21.624468 1000065 fix.go:56] duration metric: took 19.43771815s for fixHost
	I0120 12:56:21.624490 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHHostname
	I0120 12:56:21.627221 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:21.627569 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:22:0b", ip: ""} in network mk-newest-cni-476001: {Iface:virbr2 ExpiryTime:2025-01-20 13:55:15 +0000 UTC Type:0 Mac:52:54:00:3f:22:0b Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:newest-cni-476001 Clientid:01:52:54:00:3f:22:0b}
	I0120 12:56:21.627604 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined IP address 192.168.50.124 and MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:21.627794 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHPort
	I0120 12:56:21.628008 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHKeyPath
	I0120 12:56:21.628229 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHKeyPath
	I0120 12:56:21.628394 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHUsername
	I0120 12:56:21.628557 1000065 main.go:141] libmachine: Using SSH client type: native
	I0120 12:56:21.628759 1000065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.124 22 <nil> <nil>}
	I0120 12:56:21.628783 1000065 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 12:56:21.738946 1000065 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737377781.693775463
	
	I0120 12:56:21.738968 1000065 fix.go:216] guest clock: 1737377781.693775463
	I0120 12:56:21.738974 1000065 fix.go:229] Guest: 2025-01-20 12:56:21.693775463 +0000 UTC Remote: 2025-01-20 12:56:21.624471862 +0000 UTC m=+19.925823869 (delta=69.303601ms)
	I0120 12:56:21.738993 1000065 fix.go:200] guest clock delta is within tolerance: 69.303601ms
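	
	The fix.go lines above read the guest clock over SSH with "date +%s.%N" and compare it to the host clock, logging the delta. A minimal Go sketch of that comparison; the tolerance value is an assumption (the log only states the delta is within tolerance), and the fractional part is taken as nine digits as in the captured output:
	
	// clockdelta_sketch.go: parse a guest "date +%s.%N" reading and check the skew.
	package main
	
	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)
	
	func main() {
		guestOut := "1737377781.693775463" // guest output captured in the log above
		parts := strings.SplitN(guestOut, ".", 2)
		secs, _ := strconv.ParseInt(parts[0], 10, 64)
		nanos, _ := strconv.ParseInt(parts[1], 10, 64) // assumes a 9-digit fraction
		guest := time.Unix(secs, nanos)
	
		delta := guest.Sub(time.Now())
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 1 * time.Second // placeholder tolerance
		if delta <= tolerance {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
		}
	}
	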
	I0120 12:56:21.739001 1000065 start.go:83] releasing machines lock for "newest-cni-476001", held for 19.552266451s
	I0120 12:56:21.739019 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .DriverName
	I0120 12:56:21.739249 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetIP
	I0120 12:56:18.940275  999554 pod_ready.go:103] pod "coredns-668d6bf9bc-qg6gx" in "kube-system" namespace has status "Ready":"False"
	I0120 12:56:21.251195  999554 pod_ready.go:103] pod "coredns-668d6bf9bc-qg6gx" in "kube-system" namespace has status "Ready":"False"
	I0120 12:56:21.749839  999554 pod_ready.go:98] pod "coredns-668d6bf9bc-qg6gx" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-20 12:56:21 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-20 12:56:14 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-20 12:56:14 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-20 12:56:14 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-20 12:56:14 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.61.139 HostIPs:[{IP:192.168.61
.139}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-01-20 12:56:14 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-01-20 12:56:15 +0000 UTC,FinishedAt:2025-01-20 12:56:21 +0000 UTC,ContainerID:cri-o://f1bf9a54a6421f3566af28422ad44969b897505be940cebf91c18b1975ffec84,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://f1bf9a54a6421f3566af28422ad44969b897505be940cebf91c18b1975ffec84 Started:0xc002655430 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc002150570} {Name:kube-api-access-k5th7 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc002150580}] User:ni
l AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0120 12:56:21.749870  999554 pod_ready.go:82] duration metric: took 5.007361744s for pod "coredns-668d6bf9bc-qg6gx" in "kube-system" namespace to be "Ready" ...
	E0120 12:56:21.749884  999554 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-668d6bf9bc-qg6gx" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-20 12:56:21 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-20 12:56:14 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-20 12:56:14 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-20 12:56:14 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-20 12:56:14 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.6
1.139 HostIPs:[{IP:192.168.61.139}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-01-20 12:56:14 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-01-20 12:56:15 +0000 UTC,FinishedAt:2025-01-20 12:56:21 +0000 UTC,ContainerID:cri-o://f1bf9a54a6421f3566af28422ad44969b897505be940cebf91c18b1975ffec84,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://f1bf9a54a6421f3566af28422ad44969b897505be940cebf91c18b1975ffec84 Started:0xc002655430 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc002150570} {Name:kube-api-access-k5th7 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveRe
adOnly:0xc002150580}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0120 12:56:21.749898  999554 pod_ready.go:79] waiting up to 15m0s for pod "etcd-auto-816069" in "kube-system" namespace to be "Ready" ...
	I0120 12:56:21.755862  999554 pod_ready.go:93] pod "etcd-auto-816069" in "kube-system" namespace has status "Ready":"True"
	I0120 12:56:21.755882  999554 pod_ready.go:82] duration metric: took 5.937945ms for pod "etcd-auto-816069" in "kube-system" namespace to be "Ready" ...
	I0120 12:56:21.755893  999554 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-auto-816069" in "kube-system" namespace to be "Ready" ...
	I0120 12:56:21.760249  999554 pod_ready.go:93] pod "kube-apiserver-auto-816069" in "kube-system" namespace has status "Ready":"True"
	I0120 12:56:21.760275  999554 pod_ready.go:82] duration metric: took 4.374628ms for pod "kube-apiserver-auto-816069" in "kube-system" namespace to be "Ready" ...
	I0120 12:56:21.760287  999554 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-auto-816069" in "kube-system" namespace to be "Ready" ...
	I0120 12:56:21.764287  999554 pod_ready.go:93] pod "kube-controller-manager-auto-816069" in "kube-system" namespace has status "Ready":"True"
	I0120 12:56:21.764308  999554 pod_ready.go:82] duration metric: took 4.013587ms for pod "kube-controller-manager-auto-816069" in "kube-system" namespace to be "Ready" ...
	I0120 12:56:21.764319  999554 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-98rpj" in "kube-system" namespace to be "Ready" ...
	I0120 12:56:21.769406  999554 pod_ready.go:93] pod "kube-proxy-98rpj" in "kube-system" namespace has status "Ready":"True"
	I0120 12:56:21.769430  999554 pod_ready.go:82] duration metric: took 5.096911ms for pod "kube-proxy-98rpj" in "kube-system" namespace to be "Ready" ...
	I0120 12:56:21.769479  999554 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-auto-816069" in "kube-system" namespace to be "Ready" ...
	I0120 12:56:22.146041  999554 pod_ready.go:93] pod "kube-scheduler-auto-816069" in "kube-system" namespace has status "Ready":"True"
	I0120 12:56:22.146074  999554 pod_ready.go:82] duration metric: took 376.575484ms for pod "kube-scheduler-auto-816069" in "kube-system" namespace to be "Ready" ...
	I0120 12:56:22.146089  999554 pod_ready.go:39] duration metric: took 6.424531861s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
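	
	The pod_ready.go lines above poll each system-critical pod until its Ready condition reports True. A minimal client-go sketch of that kind of readiness wait; the kubeconfig path, namespace, pod name, poll interval, and timeout are illustrative assumptions, and this is not minikube's own implementation:
	
	// podready_sketch.go: poll a pod until its Ready condition is True.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(15 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-auto-816069", metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}
	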
	I0120 12:56:22.146110  999554 api_server.go:52] waiting for apiserver process to appear ...
	I0120 12:56:22.146173  999554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:56:22.161274  999554 api_server.go:72] duration metric: took 7.903667598s to wait for apiserver process to appear ...
	I0120 12:56:22.161302  999554 api_server.go:88] waiting for apiserver healthz status ...
	I0120 12:56:22.161322  999554 api_server.go:253] Checking apiserver healthz at https://192.168.61.139:8443/healthz ...
	I0120 12:56:22.166115  999554 api_server.go:279] https://192.168.61.139:8443/healthz returned 200:
	ok
	I0120 12:56:22.167130  999554 api_server.go:141] control plane version: v1.32.0
	I0120 12:56:22.167155  999554 api_server.go:131] duration metric: took 5.844776ms to wait for apiserver health ...
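	
	The api_server.go lines above poll the apiserver /healthz endpoint until it answers 200. A minimal Go sketch of such a health poll; the endpoint is copied from the log, and TLS verification is skipped here for brevity, whereas the real check trusts the cluster CA:
	
	// healthz_sketch.go: poll an HTTPS /healthz endpoint until it returns 200 OK.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption: skip certificate verification to keep the sketch self-contained.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.61.139:8443/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthz returned 200: ok")
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for apiserver healthz")
	}
	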
	I0120 12:56:22.167166  999554 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 12:56:22.347659  999554 system_pods.go:59] 7 kube-system pods found
	I0120 12:56:22.347695  999554 system_pods.go:61] "coredns-668d6bf9bc-nbczt" [21b0d99c-61bc-47ea-8d0b-c4e51f3b843b] Running
	I0120 12:56:22.347701  999554 system_pods.go:61] "etcd-auto-816069" [32e239d9-0738-42d1-8fd9-8709203c5b14] Running
	I0120 12:56:22.347705  999554 system_pods.go:61] "kube-apiserver-auto-816069" [ce00a14d-a1e7-41db-8a15-36248af808a3] Running
	I0120 12:56:22.347709  999554 system_pods.go:61] "kube-controller-manager-auto-816069" [b023a8b0-3461-4f72-9fcc-720c2ce4de00] Running
	I0120 12:56:22.347713  999554 system_pods.go:61] "kube-proxy-98rpj" [e97d4e59-67f8-4f5d-8766-71aa9ab98558] Running
	I0120 12:56:22.347716  999554 system_pods.go:61] "kube-scheduler-auto-816069" [54c073ad-e79c-4daa-9efb-24ab2449314b] Running
	I0120 12:56:22.347719  999554 system_pods.go:61] "storage-provisioner" [4fbd71f4-e6b9-435c-b599-830807b0fac9] Running
	I0120 12:56:22.347728  999554 system_pods.go:74] duration metric: took 180.554095ms to wait for pod list to return data ...
	I0120 12:56:22.347739  999554 default_sa.go:34] waiting for default service account to be created ...
	I0120 12:56:22.546717  999554 default_sa.go:45] found service account: "default"
	I0120 12:56:22.546744  999554 default_sa.go:55] duration metric: took 198.996585ms for default service account to be created ...
	I0120 12:56:22.546756  999554 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 12:56:21.742032 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:21.742428 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:22:0b", ip: ""} in network mk-newest-cni-476001: {Iface:virbr2 ExpiryTime:2025-01-20 13:55:15 +0000 UTC Type:0 Mac:52:54:00:3f:22:0b Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:newest-cni-476001 Clientid:01:52:54:00:3f:22:0b}
	I0120 12:56:21.742466 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined IP address 192.168.50.124 and MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:21.742791 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .DriverName
	I0120 12:56:21.743294 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .DriverName
	I0120 12:56:21.743483 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .DriverName
	I0120 12:56:21.743575 1000065 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 12:56:21.743629 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHHostname
	I0120 12:56:21.743727 1000065 ssh_runner.go:195] Run: cat /version.json
	I0120 12:56:21.743757 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHHostname
	I0120 12:56:21.746807 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:21.747073 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:21.747260 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:22:0b", ip: ""} in network mk-newest-cni-476001: {Iface:virbr2 ExpiryTime:2025-01-20 13:55:15 +0000 UTC Type:0 Mac:52:54:00:3f:22:0b Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:newest-cni-476001 Clientid:01:52:54:00:3f:22:0b}
	I0120 12:56:21.747306 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined IP address 192.168.50.124 and MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:21.747456 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHPort
	I0120 12:56:21.747625 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:22:0b", ip: ""} in network mk-newest-cni-476001: {Iface:virbr2 ExpiryTime:2025-01-20 13:55:15 +0000 UTC Type:0 Mac:52:54:00:3f:22:0b Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:newest-cni-476001 Clientid:01:52:54:00:3f:22:0b}
	I0120 12:56:21.747647 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined IP address 192.168.50.124 and MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:21.747689 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHKeyPath
	I0120 12:56:21.747828 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHPort
	I0120 12:56:21.747886 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHUsername
	I0120 12:56:21.748012 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHKeyPath
	I0120 12:56:21.748076 1000065 sshutil.go:53] new ssh client: &{IP:192.168.50.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/newest-cni-476001/id_rsa Username:docker}
	I0120 12:56:21.748545 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHUsername
	I0120 12:56:21.748749 1000065 sshutil.go:53] new ssh client: &{IP:192.168.50.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/newest-cni-476001/id_rsa Username:docker}
	I0120 12:56:21.831341 1000065 ssh_runner.go:195] Run: systemctl --version
	I0120 12:56:21.860495 1000065 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 12:56:22.011044 1000065 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 12:56:22.017173 1000065 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 12:56:22.017238 1000065 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 12:56:22.033167 1000065 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 12:56:22.033192 1000065 start.go:495] detecting cgroup driver to use...
	I0120 12:56:22.033257 1000065 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 12:56:22.050729 1000065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 12:56:22.064314 1000065 docker.go:217] disabling cri-docker service (if available) ...
	I0120 12:56:22.064365 1000065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 12:56:22.077150 1000065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 12:56:22.089854 1000065 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 12:56:22.200990 1000065 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 12:56:22.360408 1000065 docker.go:233] disabling docker service ...
	I0120 12:56:22.360487 1000065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 12:56:22.376206 1000065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 12:56:22.388919 1000065 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 12:56:22.511812 1000065 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 12:56:22.629522 1000065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 12:56:22.642788 1000065 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 12:56:22.660264 1000065 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0120 12:56:22.660328 1000065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:56:22.669478 1000065 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 12:56:22.669529 1000065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:56:22.678640 1000065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:56:22.687901 1000065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:56:22.696951 1000065 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 12:56:22.706926 1000065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:56:22.716653 1000065 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:56:22.733200 1000065 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
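	
	The sed commands above edit the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, and adjust conmon_cgroup and default_sysctls. A minimal Go sketch of the first two edits; paths and values are taken from the log lines, and this illustrates the technique rather than minikube's sed-based implementation:
	
	// crioconf_sketch.go: rewrite pause_image and cgroup_manager in a CRI-O drop-in.
	package main
	
	import (
		"os"
		"regexp"
	)
	
	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(path, data, 0o644); err != nil {
			panic(err)
		}
	}
	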
	I0120 12:56:22.742612 1000065 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 12:56:22.751528 1000065 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 12:56:22.751588 1000065 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 12:56:22.762891 1000065 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 12:56:22.772201 1000065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:56:22.889563 1000065 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 12:56:22.983163 1000065 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 12:56:22.983248 1000065 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 12:56:22.988624 1000065 start.go:563] Will wait 60s for crictl version
	I0120 12:56:22.988690 1000065 ssh_runner.go:195] Run: which crictl
	I0120 12:56:22.992182 1000065 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 12:56:23.031479 1000065 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 12:56:23.031603 1000065 ssh_runner.go:195] Run: crio --version
	I0120 12:56:23.057584 1000065 ssh_runner.go:195] Run: crio --version
	I0120 12:56:23.085319 1000065 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0120 12:56:23.086682 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetIP
	I0120 12:56:23.089565 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:23.089950 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:22:0b", ip: ""} in network mk-newest-cni-476001: {Iface:virbr2 ExpiryTime:2025-01-20 13:55:15 +0000 UTC Type:0 Mac:52:54:00:3f:22:0b Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:newest-cni-476001 Clientid:01:52:54:00:3f:22:0b}
	I0120 12:56:23.089978 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined IP address 192.168.50.124 and MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:23.090236 1000065 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0120 12:56:23.093923 1000065 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
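	
	The bash one-liner above rewrites /etc/hosts idempotently: it filters out any existing host.minikube.internal entry, appends the current mapping, and copies the result back. A minimal Go sketch of the same rewrite; it writes the file directly rather than staging a temp file and using sudo cp as the logged one-liner does:
	
	// hosts_sketch.go: drop a stale /etc/hosts entry and append the current mapping.
	package main
	
	import (
		"os"
		"strings"
	)
	
	func main() {
		const entry = "192.168.50.1\thost.minikube.internal" // mapping from the log line above
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, entry)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			panic(err)
		}
	}
	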
	I0120 12:56:23.106991 1000065 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0120 12:56:22.747960  999554 system_pods.go:87] 7 kube-system pods found
	I0120 12:56:22.946699  999554 system_pods.go:105] "coredns-668d6bf9bc-nbczt" [21b0d99c-61bc-47ea-8d0b-c4e51f3b843b] Running
	I0120 12:56:22.946725  999554 system_pods.go:105] "etcd-auto-816069" [32e239d9-0738-42d1-8fd9-8709203c5b14] Running
	I0120 12:56:22.946734  999554 system_pods.go:105] "kube-apiserver-auto-816069" [ce00a14d-a1e7-41db-8a15-36248af808a3] Running
	I0120 12:56:22.946742  999554 system_pods.go:105] "kube-controller-manager-auto-816069" [b023a8b0-3461-4f72-9fcc-720c2ce4de00] Running
	I0120 12:56:22.946748  999554 system_pods.go:105] "kube-proxy-98rpj" [e97d4e59-67f8-4f5d-8766-71aa9ab98558] Running
	I0120 12:56:22.946755  999554 system_pods.go:105] "kube-scheduler-auto-816069" [54c073ad-e79c-4daa-9efb-24ab2449314b] Running
	I0120 12:56:22.946762  999554 system_pods.go:105] "storage-provisioner" [4fbd71f4-e6b9-435c-b599-830807b0fac9] Running
	I0120 12:56:22.946772  999554 system_pods.go:147] duration metric: took 400.008535ms to wait for k8s-apps to be running ...
	I0120 12:56:22.946784  999554 system_svc.go:44] waiting for kubelet service to be running ....
	I0120 12:56:22.946844  999554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 12:56:22.960996  999554 system_svc.go:56] duration metric: took 14.201705ms WaitForService to wait for kubelet
	I0120 12:56:22.961028  999554 kubeadm.go:582] duration metric: took 8.703426333s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 12:56:22.961053  999554 node_conditions.go:102] verifying NodePressure condition ...
	I0120 12:56:23.147338  999554 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 12:56:23.147367  999554 node_conditions.go:123] node cpu capacity is 2
	I0120 12:56:23.147381  999554 node_conditions.go:105] duration metric: took 186.322519ms to run NodePressure ...
	I0120 12:56:23.147433  999554 start.go:241] waiting for startup goroutines ...
	I0120 12:56:23.147451  999554 start.go:246] waiting for cluster config update ...
	I0120 12:56:23.147467  999554 start.go:255] writing updated cluster config ...
	I0120 12:56:23.147780  999554 ssh_runner.go:195] Run: rm -f paused
	I0120 12:56:23.202588  999554 start.go:600] kubectl: 1.32.1, cluster: 1.32.0 (minor skew: 0)
	I0120 12:56:23.204388  999554 out.go:177] * Done! kubectl is now configured to use "auto-816069" cluster and "default" namespace by default
	I0120 12:56:23.108254 1000065 kubeadm.go:883] updating cluster {Name:newest-cni-476001 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:newest-cni-476001 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.124 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: M
ultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 12:56:23.108457 1000065 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 12:56:23.108542 1000065 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:56:23.143952 1000065 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0120 12:56:23.144031 1000065 ssh_runner.go:195] Run: which lz4
	I0120 12:56:23.148902 1000065 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 12:56:23.154414 1000065 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 12:56:23.154446 1000065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I0120 12:56:24.419882 1000065 crio.go:462] duration metric: took 1.271019375s to copy over tarball
	I0120 12:56:24.419953 1000065 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 12:56:26.753791 1000065 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.333810866s)
	I0120 12:56:26.753820 1000065 crio.go:469] duration metric: took 2.33390719s to extract the tarball
	I0120 12:56:26.753827 1000065 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0120 12:56:26.794467 1000065 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:56:26.838802 1000065 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 12:56:26.838829 1000065 cache_images.go:84] Images are preloaded, skipping loading
	I0120 12:56:26.838837 1000065 kubeadm.go:934] updating node { 192.168.50.124 8443 v1.32.0 crio true true} ...
	I0120 12:56:26.838951 1000065 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-476001 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.124
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:newest-cni-476001 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 12:56:26.839035 1000065 ssh_runner.go:195] Run: crio config
	I0120 12:56:26.888108 1000065 cni.go:84] Creating CNI manager for ""
	I0120 12:56:26.888149 1000065 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:56:26.888162 1000065 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0120 12:56:26.888196 1000065 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.124 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-476001 NodeName:newest-cni-476001 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.124"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.124 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 12:56:26.888358 1000065 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.124
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-476001"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.124"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.124"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 12:56:26.888439 1000065 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 12:56:26.903999 1000065 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 12:56:26.904090 1000065 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 12:56:26.913199 1000065 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0120 12:56:26.930824 1000065 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 12:56:26.947242 1000065 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I0120 12:56:26.966789 1000065 ssh_runner.go:195] Run: grep 192.168.50.124	control-plane.minikube.internal$ /etc/hosts
	I0120 12:56:26.970397 1000065 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.124	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 12:56:26.982074 1000065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:56:27.111400 1000065 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:56:27.137562 1000065 certs.go:68] Setting up /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/newest-cni-476001 for IP: 192.168.50.124
	I0120 12:56:27.137589 1000065 certs.go:194] generating shared ca certs ...
	I0120 12:56:27.137609 1000065 certs.go:226] acquiring lock for ca certs: {Name:mk7d6f61e0c358ebc451db1b73619131ab80bd63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:56:27.137780 1000065 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20151-942401/.minikube/ca.key
	I0120 12:56:27.137835 1000065 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.key
	I0120 12:56:27.137847 1000065 certs.go:256] generating profile certs ...
	I0120 12:56:27.137997 1000065 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/newest-cni-476001/client.key
	I0120 12:56:27.138075 1000065 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/newest-cni-476001/apiserver.key.6b433e1b
	I0120 12:56:27.138124 1000065 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/newest-cni-476001/proxy-client.key
	I0120 12:56:27.138280 1000065 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656.pem (1338 bytes)
	W0120 12:56:27.138320 1000065 certs.go:480] ignoring /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656_empty.pem, impossibly tiny 0 bytes
	I0120 12:56:27.138333 1000065 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem (1679 bytes)
	I0120 12:56:27.138367 1000065 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem (1078 bytes)
	I0120 12:56:27.138398 1000065 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem (1123 bytes)
	I0120 12:56:27.138429 1000065 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem (1675 bytes)
	I0120 12:56:27.138484 1000065 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem (1708 bytes)
	I0120 12:56:27.139348 1000065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 12:56:27.183174 1000065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0120 12:56:27.209959 1000065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 12:56:27.245931 1000065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 12:56:27.285538 1000065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/newest-cni-476001/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0120 12:56:27.316659 1000065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/newest-cni-476001/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0120 12:56:27.351357 1000065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/newest-cni-476001/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 12:56:27.376911 1000065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/newest-cni-476001/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 12:56:27.402108 1000065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 12:56:27.424569 1000065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656.pem --> /usr/share/ca-certificates/949656.pem (1338 bytes)
	I0120 12:56:27.445954 1000065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem --> /usr/share/ca-certificates/9496562.pem (1708 bytes)
	I0120 12:56:27.468837 1000065 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 12:56:27.487249 1000065 ssh_runner.go:195] Run: openssl version
	I0120 12:56:27.494297 1000065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 12:56:27.504485 1000065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:56:27.508824 1000065 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 11:22 /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:56:27.508875 1000065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:56:27.514250 1000065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 12:56:27.524036 1000065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/949656.pem && ln -fs /usr/share/ca-certificates/949656.pem /etc/ssl/certs/949656.pem"
	I0120 12:56:27.534210 1000065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/949656.pem
	I0120 12:56:27.538205 1000065 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 11:30 /usr/share/ca-certificates/949656.pem
	I0120 12:56:27.538257 1000065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/949656.pem
	I0120 12:56:27.543518 1000065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/949656.pem /etc/ssl/certs/51391683.0"
	I0120 12:56:27.554963 1000065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9496562.pem && ln -fs /usr/share/ca-certificates/9496562.pem /etc/ssl/certs/9496562.pem"
	I0120 12:56:27.565232 1000065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9496562.pem
	I0120 12:56:27.569262 1000065 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 11:30 /usr/share/ca-certificates/9496562.pem
	I0120 12:56:27.569316 1000065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9496562.pem
	I0120 12:56:27.574461 1000065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9496562.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 12:56:27.584267 1000065 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 12:56:27.588456 1000065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 12:56:27.593882 1000065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 12:56:27.599605 1000065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 12:56:27.605329 1000065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 12:56:27.610660 1000065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 12:56:27.615957 1000065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
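	
	The openssl x509 -checkend 86400 runs above verify that each certificate stays valid for at least another 24 hours. A minimal Go sketch of the same check for one of those files; the path is one of the certificates listed in the log:
	
	// checkend_sketch.go: report whether a PEM certificate expires within 24 hours.
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	func main() {
		raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 86400 seconds")
		} else {
			fmt.Println("certificate is valid for at least another 24 hours")
		}
	}
	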
	I0120 12:56:27.621253 1000065 kubeadm.go:392] StartCluster: {Name:newest-cni-476001 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:newest-cni-476001 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.124 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Mult
iNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:56:27.621367 1000065 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 12:56:27.621414 1000065 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 12:56:27.663430 1000065 cri.go:89] found id: ""
	I0120 12:56:27.663507 1000065 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 12:56:27.674580 1000065 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0120 12:56:27.674600 1000065 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0120 12:56:27.674649 1000065 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0120 12:56:27.683756 1000065 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0120 12:56:27.685134 1000065 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-476001" does not appear in /home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:56:27.686209 1000065 kubeconfig.go:62] /home/jenkins/minikube-integration/20151-942401/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-476001" cluster setting kubeconfig missing "newest-cni-476001" context setting]
	I0120 12:56:27.687639 1000065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/kubeconfig: {Name:mk7b2d17f701fc845d991764ab9cc32b8b0646e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:56:27.690079 1000065 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0120 12:56:27.700101 1000065 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.124
	I0120 12:56:27.700137 1000065 kubeadm.go:1160] stopping kube-system containers ...
	I0120 12:56:27.700154 1000065 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0120 12:56:27.700205 1000065 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 12:56:27.740519 1000065 cri.go:89] found id: ""
	I0120 12:56:27.740602 1000065 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0120 12:56:27.761516 1000065 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:56:27.771234 1000065 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:56:27.771256 1000065 kubeadm.go:157] found existing configuration files:
	
	I0120 12:56:27.771302 1000065 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 12:56:27.781861 1000065 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:56:27.781924 1000065 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:56:27.790945 1000065 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 12:56:27.800230 1000065 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:56:27.800290 1000065 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:56:27.809359 1000065 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 12:56:27.817977 1000065 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:56:27.818042 1000065 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:56:27.827443 1000065 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 12:56:27.836321 1000065 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:56:27.836437 1000065 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 12:56:27.846541 1000065 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 12:56:27.856605 1000065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:56:27.974565 1000065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:56:28.872980 1000065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:56:29.074538 1000065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:56:29.149132 1000065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
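
The five "kubeadm init phase" invocations above regenerate the certificates, kubeconfig files, kubelet bootstrap, static control-plane manifests, and the local etcd manifest, in that order. A minimal sketch of the same sequence (illustrative only, not minikube's ssh_runner code; the bash -c wrapper, binary path, and config path are copied from the log lines above):

    // Sketch: replays the "kubeadm init phase" sequence shown in the log.
    // The paths below come from the log lines; everything else is illustrative.
    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
    	for _, phase := range phases {
    		cmd := fmt.Sprintf(
    			`sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
    			phase,
    		)
    		// Each phase is executed through bash, mirroring the /bin/bash -c calls above.
    		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
    			log.Fatalf("phase %q failed: %v\n%s", phase, err, out)
    		}
    	}
    }
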
	I0120 12:56:29.259554 1000065 api_server.go:52] waiting for apiserver process to appear ...
	I0120 12:56:29.259637 1000065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:56:29.760405 1000065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:56:30.260490 1000065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:56:30.760748 1000065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:56:30.784104 1000065 api_server.go:72] duration metric: took 1.524551297s to wait for apiserver process to appear ...
	I0120 12:56:30.784136 1000065 api_server.go:88] waiting for apiserver healthz status ...
	I0120 12:56:30.784166 1000065 api_server.go:253] Checking apiserver healthz at https://192.168.50.124:8443/healthz ...
	I0120 12:56:30.784866 1000065 api_server.go:269] stopped: https://192.168.50.124:8443/healthz: Get "https://192.168.50.124:8443/healthz": dial tcp 192.168.50.124:8443: connect: connection refused
	I0120 12:56:31.284518 1000065 api_server.go:253] Checking apiserver healthz at https://192.168.50.124:8443/healthz ...
	I0120 12:56:33.253547 1000065 api_server.go:279] https://192.168.50.124:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0120 12:56:33.253584 1000065 api_server.go:103] status: https://192.168.50.124:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0120 12:56:33.253605 1000065 api_server.go:253] Checking apiserver healthz at https://192.168.50.124:8443/healthz ...
	I0120 12:56:33.312775 1000065 api_server.go:279] https://192.168.50.124:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0120 12:56:33.312813 1000065 api_server.go:103] status: https://192.168.50.124:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0120 12:56:33.312832 1000065 api_server.go:253] Checking apiserver healthz at https://192.168.50.124:8443/healthz ...
	I0120 12:56:33.333886 1000065 api_server.go:279] https://192.168.50.124:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0120 12:56:33.333917 1000065 api_server.go:103] status: https://192.168.50.124:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0120 12:56:33.784500 1000065 api_server.go:253] Checking apiserver healthz at https://192.168.50.124:8443/healthz ...
	I0120 12:56:33.789637 1000065 api_server.go:279] https://192.168.50.124:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 12:56:33.789667 1000065 api_server.go:103] status: https://192.168.50.124:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 12:56:34.284353 1000065 api_server.go:253] Checking apiserver healthz at https://192.168.50.124:8443/healthz ...
	I0120 12:56:34.291626 1000065 api_server.go:279] https://192.168.50.124:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 12:56:34.291655 1000065 api_server.go:103] status: https://192.168.50.124:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 12:56:34.785107 1000065 api_server.go:253] Checking apiserver healthz at https://192.168.50.124:8443/healthz ...
	I0120 12:56:34.789691 1000065 api_server.go:279] https://192.168.50.124:8443/healthz returned 200:
	ok
	I0120 12:56:34.796652 1000065 api_server.go:141] control plane version: v1.32.0
	I0120 12:56:34.796678 1000065 api_server.go:131] duration metric: took 4.012535175s to wait for apiserver health ...
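
The probe loop above polls https://192.168.50.124:8443/healthz roughly every 500ms until the apiserver answers 200 "ok". The initial 403s are returned while the request is still anonymous and the RBAC bootstrap roles that allow unauthenticated /healthz access have not yet been created (the same rbac/bootstrap-roles hook shown as failing in the 500 responses); once the post-start hooks finish, the endpoint turns healthy. A rough sketch of such a poll loop (the interval and timeout are assumptions, and TLS verification is skipped only to keep the example short):

    // Rough illustration of a healthz poll loop like the one logged above.
    // Interval and timeout are assumptions; certificate verification is
    // disabled purely to keep the sketch self-contained.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // apiserver reported "ok"
    			}
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms retry cadence in the log
    	}
    	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.50.124:8443/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
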
	I0120 12:56:34.796688 1000065 cni.go:84] Creating CNI manager for ""
	I0120 12:56:34.796694 1000065 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:56:34.798172 1000065 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 12:56:34.799402 1000065 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 12:56:34.809870 1000065 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 12:56:34.831769 1000065 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 12:56:34.846209 1000065 system_pods.go:59] 8 kube-system pods found
	I0120 12:56:34.846245 1000065 system_pods.go:61] "coredns-668d6bf9bc-gfv45" [1301bae8-1ccb-4228-bf20-e277169576e1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0120 12:56:34.846254 1000065 system_pods.go:61] "etcd-newest-cni-476001" [795b7c68-f275-438d-a2bf-4a0e18406de5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0120 12:56:34.846262 1000065 system_pods.go:61] "kube-apiserver-newest-cni-476001" [3b53f18b-7f0d-40b3-b4d2-b4fb9d6df89e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0120 12:56:34.846268 1000065 system_pods.go:61] "kube-controller-manager-newest-cni-476001" [8b5a196e-a0ee-4dc1-b0fa-e060632caa01] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0120 12:56:34.846275 1000065 system_pods.go:61] "kube-proxy-hgn45" [afe200f1-e639-4dc3-b665-d05f32401d79] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0120 12:56:34.846282 1000065 system_pods.go:61] "kube-scheduler-newest-cni-476001" [504cffe0-4c70-4c05-8d6d-e91a3bb7bf39] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0120 12:56:34.846287 1000065 system_pods.go:61] "metrics-server-f79f97bbb-8d4c6" [75b212ee-001c-472b-8f53-c4c11de8d158] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 12:56:34.846293 1000065 system_pods.go:61] "storage-provisioner" [cace0524-58bc-4c6a-beda-c0127d6954ff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0120 12:56:34.846299 1000065 system_pods.go:74] duration metric: took 14.508276ms to wait for pod list to return data ...
	I0120 12:56:34.846311 1000065 node_conditions.go:102] verifying NodePressure condition ...
	I0120 12:56:34.850808 1000065 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 12:56:34.850836 1000065 node_conditions.go:123] node cpu capacity is 2
	I0120 12:56:34.850850 1000065 node_conditions.go:105] duration metric: took 4.534244ms to run NodePressure ...
	I0120 12:56:34.850869 1000065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:56:35.153586 1000065 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 12:56:35.165650 1000065 ops.go:34] apiserver oom_adj: -16
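
The oom_adj read above confirms the apiserver is running with oom_adj -16, which makes the kernel's OOM killer strongly prefer other processes over it. A small sketch of the same check (same pgrep pattern and /proc path as the log lines; not minikube's ops.go):

    // Sketch: find the apiserver PID and read its oom_adj, as the log does.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "apiserver process not found:", err)
    		return
    	}
    	pid := strings.TrimSpace(string(out))
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Printf("apiserver oom_adj: %s", adj)
    }
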
	I0120 12:56:35.165679 1000065 kubeadm.go:597] duration metric: took 7.491070849s to restartPrimaryControlPlane
	I0120 12:56:35.165692 1000065 kubeadm.go:394] duration metric: took 7.544447344s to StartCluster
	I0120 12:56:35.165716 1000065 settings.go:142] acquiring lock: {Name:mk1751cb86b61fcfcb0ff093c93fb653ca2002ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:56:35.165797 1000065 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:56:35.168226 1000065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/kubeconfig: {Name:mk7b2d17f701fc845d991764ab9cc32b8b0646e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:56:35.168589 1000065 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.124 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 12:56:35.168692 1000065 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 12:56:35.168827 1000065 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-476001"
	I0120 12:56:35.168831 1000065 config.go:182] Loaded profile config "newest-cni-476001": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:56:35.168854 1000065 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-476001"
	W0120 12:56:35.168875 1000065 addons.go:247] addon storage-provisioner should already be in state true
	I0120 12:56:35.168889 1000065 addons.go:69] Setting default-storageclass=true in profile "newest-cni-476001"
	I0120 12:56:35.168909 1000065 host.go:66] Checking if "newest-cni-476001" exists ...
	I0120 12:56:35.168916 1000065 addons.go:69] Setting dashboard=true in profile "newest-cni-476001"
	I0120 12:56:35.168932 1000065 addons.go:238] Setting addon dashboard=true in "newest-cni-476001"
	W0120 12:56:35.168943 1000065 addons.go:247] addon dashboard should already be in state true
	I0120 12:56:35.168910 1000065 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-476001"
	I0120 12:56:35.168952 1000065 addons.go:69] Setting metrics-server=true in profile "newest-cni-476001"
	I0120 12:56:35.168971 1000065 host.go:66] Checking if "newest-cni-476001" exists ...
	I0120 12:56:35.168978 1000065 addons.go:238] Setting addon metrics-server=true in "newest-cni-476001"
	W0120 12:56:35.168987 1000065 addons.go:247] addon metrics-server should already be in state true
	I0120 12:56:35.169042 1000065 host.go:66] Checking if "newest-cni-476001" exists ...
	I0120 12:56:35.169360 1000065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:56:35.169362 1000065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:56:35.169375 1000065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:56:35.169387 1000065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:56:35.169392 1000065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:56:35.169410 1000065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:56:35.169443 1000065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:56:35.169461 1000065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:56:35.170280 1000065 out.go:177] * Verifying Kubernetes components...
	I0120 12:56:35.172440 1000065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:56:35.186022 1000065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36139
	I0120 12:56:35.186637 1000065 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:56:35.189293 1000065 main.go:141] libmachine: Using API Version  1
	I0120 12:56:35.189316 1000065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:56:35.189678 1000065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44671
	I0120 12:56:35.189883 1000065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41353
	I0120 12:56:35.189917 1000065 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:56:35.190167 1000065 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:56:35.190271 1000065 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:56:35.190285 1000065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42879
	I0120 12:56:35.190657 1000065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:56:35.190712 1000065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:56:35.190770 1000065 main.go:141] libmachine: Using API Version  1
	I0120 12:56:35.190787 1000065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:56:35.190860 1000065 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:56:35.191151 1000065 main.go:141] libmachine: Using API Version  1
	I0120 12:56:35.191170 1000065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:56:35.191207 1000065 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:56:35.191385 1000065 main.go:141] libmachine: Using API Version  1
	I0120 12:56:35.191410 1000065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:56:35.191749 1000065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:56:35.191758 1000065 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:56:35.191795 1000065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:56:35.191921 1000065 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:56:35.191998 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetState
	I0120 12:56:35.192499 1000065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:56:35.192547 1000065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:56:35.195919 1000065 addons.go:238] Setting addon default-storageclass=true in "newest-cni-476001"
	W0120 12:56:35.195943 1000065 addons.go:247] addon default-storageclass should already be in state true
	I0120 12:56:35.195975 1000065 host.go:66] Checking if "newest-cni-476001" exists ...
	I0120 12:56:35.196327 1000065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:56:35.196362 1000065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:56:35.209062 1000065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43521
	I0120 12:56:35.210221 1000065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39465
	I0120 12:56:35.212054 1000065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37661
	I0120 12:56:35.223235 1000065 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:56:35.223355 1000065 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:56:35.223596 1000065 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:56:35.223928 1000065 main.go:141] libmachine: Using API Version  1
	I0120 12:56:35.223948 1000065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:56:35.224078 1000065 main.go:141] libmachine: Using API Version  1
	I0120 12:56:35.224089 1000065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:56:35.224173 1000065 main.go:141] libmachine: Using API Version  1
	I0120 12:56:35.224191 1000065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:56:35.224476 1000065 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:56:35.224686 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetState
	I0120 12:56:35.224881 1000065 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:56:35.225082 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetState
	I0120 12:56:35.226847 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .DriverName
	I0120 12:56:35.227567 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .DriverName
	I0120 12:56:35.229117 1000065 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:56:35.229280 1000065 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0120 12:56:35.229292 1000065 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0120 12:56:35.229314 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetState
	I0120 12:56:35.230642 1000065 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 12:56:35.230752 1000065 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 12:56:35.230777 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHHostname
	I0120 12:56:35.231034 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .DriverName
	I0120 12:56:35.232052 1000065 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0120 12:56:35.232864 1000065 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:56:35.233551 1000065 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0120 12:56:35.233573 1000065 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0120 12:56:35.233594 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHHostname
	I0120 12:56:35.234223 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:35.234402 1000065 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:56:35.234422 1000065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 12:56:35.234441 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHHostname
	I0120 12:56:35.235264 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:22:0b", ip: ""} in network mk-newest-cni-476001: {Iface:virbr2 ExpiryTime:2025-01-20 13:55:15 +0000 UTC Type:0 Mac:52:54:00:3f:22:0b Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:newest-cni-476001 Clientid:01:52:54:00:3f:22:0b}
	I0120 12:56:35.235298 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined IP address 192.168.50.124 and MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:35.235655 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHPort
	I0120 12:56:35.235922 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHKeyPath
	I0120 12:56:35.236121 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHUsername
	I0120 12:56:35.236295 1000065 sshutil.go:53] new ssh client: &{IP:192.168.50.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/newest-cni-476001/id_rsa Username:docker}
	I0120 12:56:35.238188 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:35.238561 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:22:0b", ip: ""} in network mk-newest-cni-476001: {Iface:virbr2 ExpiryTime:2025-01-20 13:55:15 +0000 UTC Type:0 Mac:52:54:00:3f:22:0b Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:newest-cni-476001 Clientid:01:52:54:00:3f:22:0b}
	I0120 12:56:35.238586 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined IP address 192.168.50.124 and MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:35.238780 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHPort
	I0120 12:56:35.238815 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:35.238965 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHKeyPath
	I0120 12:56:35.239142 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:22:0b", ip: ""} in network mk-newest-cni-476001: {Iface:virbr2 ExpiryTime:2025-01-20 13:55:15 +0000 UTC Type:0 Mac:52:54:00:3f:22:0b Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:newest-cni-476001 Clientid:01:52:54:00:3f:22:0b}
	I0120 12:56:35.239162 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined IP address 192.168.50.124 and MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:35.239193 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHUsername
	I0120 12:56:35.239290 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHPort
	I0120 12:56:35.239346 1000065 sshutil.go:53] new ssh client: &{IP:192.168.50.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/newest-cni-476001/id_rsa Username:docker}
	I0120 12:56:35.239441 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHKeyPath
	I0120 12:56:35.239534 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHUsername
	I0120 12:56:35.239610 1000065 sshutil.go:53] new ssh client: &{IP:192.168.50.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/newest-cni-476001/id_rsa Username:docker}
	I0120 12:56:35.244662 1000065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40955
	I0120 12:56:35.245378 1000065 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:56:35.245912 1000065 main.go:141] libmachine: Using API Version  1
	I0120 12:56:35.245941 1000065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:56:35.246321 1000065 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:56:35.246974 1000065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:56:35.247018 1000065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:56:35.263391 1000065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44403
	I0120 12:56:35.263802 1000065 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:56:35.264317 1000065 main.go:141] libmachine: Using API Version  1
	I0120 12:56:35.264344 1000065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:56:35.264720 1000065 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:56:35.264945 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetState
	I0120 12:56:35.266541 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .DriverName
	I0120 12:56:35.266760 1000065 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 12:56:35.266782 1000065 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 12:56:35.266806 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHHostname
	I0120 12:56:35.269795 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:35.270248 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:22:0b", ip: ""} in network mk-newest-cni-476001: {Iface:virbr2 ExpiryTime:2025-01-20 13:55:15 +0000 UTC Type:0 Mac:52:54:00:3f:22:0b Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:newest-cni-476001 Clientid:01:52:54:00:3f:22:0b}
	I0120 12:56:35.270278 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | domain newest-cni-476001 has defined IP address 192.168.50.124 and MAC address 52:54:00:3f:22:0b in network mk-newest-cni-476001
	I0120 12:56:35.270471 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHPort
	I0120 12:56:35.270648 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHKeyPath
	I0120 12:56:35.270822 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .GetSSHUsername
	I0120 12:56:35.271005 1000065 sshutil.go:53] new ssh client: &{IP:192.168.50.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/newest-cni-476001/id_rsa Username:docker}
	I0120 12:56:35.422785 1000065 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:56:35.447531 1000065 api_server.go:52] waiting for apiserver process to appear ...
	I0120 12:56:35.447618 1000065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:56:35.468137 1000065 api_server.go:72] duration metric: took 299.49711ms to wait for apiserver process to appear ...
	I0120 12:56:35.468174 1000065 api_server.go:88] waiting for apiserver healthz status ...
	I0120 12:56:35.468202 1000065 api_server.go:253] Checking apiserver healthz at https://192.168.50.124:8443/healthz ...
	I0120 12:56:35.476566 1000065 api_server.go:279] https://192.168.50.124:8443/healthz returned 200:
	ok
	I0120 12:56:35.479377 1000065 api_server.go:141] control plane version: v1.32.0
	I0120 12:56:35.479401 1000065 api_server.go:131] duration metric: took 11.219372ms to wait for apiserver health ...
	I0120 12:56:35.479410 1000065 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 12:56:35.487730 1000065 system_pods.go:59] 8 kube-system pods found
	I0120 12:56:35.487767 1000065 system_pods.go:61] "coredns-668d6bf9bc-gfv45" [1301bae8-1ccb-4228-bf20-e277169576e1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0120 12:56:35.487777 1000065 system_pods.go:61] "etcd-newest-cni-476001" [795b7c68-f275-438d-a2bf-4a0e18406de5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0120 12:56:35.487788 1000065 system_pods.go:61] "kube-apiserver-newest-cni-476001" [3b53f18b-7f0d-40b3-b4d2-b4fb9d6df89e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0120 12:56:35.487795 1000065 system_pods.go:61] "kube-controller-manager-newest-cni-476001" [8b5a196e-a0ee-4dc1-b0fa-e060632caa01] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0120 12:56:35.487799 1000065 system_pods.go:61] "kube-proxy-hgn45" [afe200f1-e639-4dc3-b665-d05f32401d79] Running
	I0120 12:56:35.487805 1000065 system_pods.go:61] "kube-scheduler-newest-cni-476001" [504cffe0-4c70-4c05-8d6d-e91a3bb7bf39] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0120 12:56:35.487810 1000065 system_pods.go:61] "metrics-server-f79f97bbb-8d4c6" [75b212ee-001c-472b-8f53-c4c11de8d158] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 12:56:35.487817 1000065 system_pods.go:61] "storage-provisioner" [cace0524-58bc-4c6a-beda-c0127d6954ff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0120 12:56:35.487823 1000065 system_pods.go:74] duration metric: took 8.407719ms to wait for pod list to return data ...
	I0120 12:56:35.487832 1000065 default_sa.go:34] waiting for default service account to be created ...
	I0120 12:56:35.496898 1000065 default_sa.go:45] found service account: "default"
	I0120 12:56:35.496919 1000065 default_sa.go:55] duration metric: took 9.080851ms for default service account to be created ...
	I0120 12:56:35.496934 1000065 kubeadm.go:582] duration metric: took 328.301023ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0120 12:56:35.496960 1000065 node_conditions.go:102] verifying NodePressure condition ...
	I0120 12:56:35.500887 1000065 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 12:56:35.500908 1000065 node_conditions.go:123] node cpu capacity is 2
	I0120 12:56:35.500916 1000065 node_conditions.go:105] duration metric: took 3.951372ms to run NodePressure ...
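
The NodePressure verification above reads each node's ephemeral-storage and CPU capacity and confirms no pressure condition is set. A rough client-go sketch of that kind of check (the kubeconfig path is a placeholder, and this is not minikube's node_conditions.go):

    // Sketch: list nodes, print capacity, and flag any pressure condition that is True.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// The kubeconfig path below is a placeholder for this sketch.
    	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
    		for _, c := range n.Status.Conditions {
    			// Memory/Disk/PID pressure conditions should all be False on a healthy node.
    			if c.Type != corev1.NodeReady && c.Status == corev1.ConditionTrue {
    				fmt.Printf("  pressure condition %s is True: %s\n", c.Type, c.Message)
    			}
    		}
    	}
    }
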
	I0120 12:56:35.500928 1000065 start.go:241] waiting for startup goroutines ...
	I0120 12:56:35.554848 1000065 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0120 12:56:35.554880 1000065 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0120 12:56:35.580350 1000065 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0120 12:56:35.580377 1000065 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0120 12:56:35.582842 1000065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 12:56:35.616978 1000065 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0120 12:56:35.617003 1000065 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0120 12:56:35.617360 1000065 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 12:56:35.617380 1000065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0120 12:56:35.645902 1000065 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0120 12:56:35.645936 1000065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0120 12:56:35.660434 1000065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:56:35.673644 1000065 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 12:56:35.673678 1000065 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 12:56:35.803815 1000065 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 12:56:35.803851 1000065 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 12:56:35.805618 1000065 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0120 12:56:35.805640 1000065 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0120 12:56:35.886593 1000065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 12:56:35.898784 1000065 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0120 12:56:35.898820 1000065 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0120 12:56:35.947072 1000065 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0120 12:56:35.947108 1000065 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0120 12:56:36.015968 1000065 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0120 12:56:36.015999 1000065 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0120 12:56:36.121244 1000065 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 12:56:36.121280 1000065 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0120 12:56:36.182210 1000065 main.go:141] libmachine: Making call to close driver server
	I0120 12:56:36.182237 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .Close
	I0120 12:56:36.182593 1000065 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:56:36.182619 1000065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:56:36.182630 1000065 main.go:141] libmachine: Making call to close driver server
	I0120 12:56:36.182639 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .Close
	I0120 12:56:36.182909 1000065 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:56:36.182928 1000065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:56:36.182944 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | Closing plugin on server side
	I0120 12:56:36.189892 1000065 main.go:141] libmachine: Making call to close driver server
	I0120 12:56:36.189909 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .Close
	I0120 12:56:36.190146 1000065 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:56:36.190165 1000065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:56:36.208932 1000065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 12:56:37.594611 1000065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.934130005s)
	I0120 12:56:37.594670 1000065 main.go:141] libmachine: Making call to close driver server
	I0120 12:56:37.594682 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .Close
	I0120 12:56:37.594676 1000065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.708034927s)
	I0120 12:56:37.594728 1000065 main.go:141] libmachine: Making call to close driver server
	I0120 12:56:37.594744 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .Close
	I0120 12:56:37.595025 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | Closing plugin on server side
	I0120 12:56:37.595054 1000065 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:56:37.595059 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | Closing plugin on server side
	I0120 12:56:37.595070 1000065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:56:37.595075 1000065 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:56:37.595081 1000065 main.go:141] libmachine: Making call to close driver server
	I0120 12:56:37.595088 1000065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:56:37.595091 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .Close
	I0120 12:56:37.595097 1000065 main.go:141] libmachine: Making call to close driver server
	I0120 12:56:37.595103 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .Close
	I0120 12:56:37.595419 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | Closing plugin on server side
	I0120 12:56:37.595444 1000065 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:56:37.595481 1000065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:56:37.595493 1000065 addons.go:479] Verifying addon metrics-server=true in "newest-cni-476001"
	I0120 12:56:37.597000 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | Closing plugin on server side
	I0120 12:56:37.597047 1000065 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:56:37.597070 1000065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:56:38.005210 1000065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.796211364s)
	I0120 12:56:38.005281 1000065 main.go:141] libmachine: Making call to close driver server
	I0120 12:56:38.005401 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .Close
	I0120 12:56:38.005845 1000065 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:56:38.005865 1000065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:56:38.005869 1000065 main.go:141] libmachine: (newest-cni-476001) DBG | Closing plugin on server side
	I0120 12:56:38.005879 1000065 main.go:141] libmachine: Making call to close driver server
	I0120 12:56:38.005892 1000065 main.go:141] libmachine: (newest-cni-476001) Calling .Close
	I0120 12:56:38.006162 1000065 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:56:38.006181 1000065 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:56:38.007916 1000065 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-476001 addons enable metrics-server
	
	I0120 12:56:38.009317 1000065 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	I0120 12:56:38.010879 1000065 addons.go:514] duration metric: took 2.842190594s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
	I0120 12:56:38.010914 1000065 start.go:246] waiting for cluster config update ...
	I0120 12:56:38.010925 1000065 start.go:255] writing updated cluster config ...
	I0120 12:56:38.011369 1000065 ssh_runner.go:195] Run: rm -f paused
	I0120 12:56:38.064877 1000065 start.go:600] kubectl: 1.32.1, cluster: 1.32.0 (minor skew: 0)
	I0120 12:56:38.066411 1000065 out.go:177] * Done! kubectl is now configured to use "newest-cni-476001" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 20 12:56:41 embed-certs-987349 crio[726]: time="2025-01-20 12:56:41.066123893Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:51bb891950cca1451e0a514af521a5272de9baf90097537b982ac68b8c9cb412,PodSandboxId:095c48d982817add96efa4ecaee8d2ffe32a9d848dee9733c61986594ca8e5cf,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737377781696932494,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-hqndr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 8305e335-1e15-4690-aee4-a68de05a85ff,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:139ce8185dd17c5a7ac691b17d32befd2ac55a1385e186f2a79b517f02ecdfec,PodSandboxId:0efcf9e9b960613635d1daf08a339654b6958bb1c2e55245c47078aa8c70ff02,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737376495722664687,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-wbp44,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubern
etes.pod.uid: d3a6da90-14b1-44c8-b292-0e044ef1a038,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77313e42d73c5d3b5f10e6204f5d7007714640bcbe0dbf4bf6a24ccf164c591b,PodSandboxId:2f58a0db99563e51253e70a20592193cf6523afb951fc407365321d032a86dea,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737376487548847560,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: sto
rage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 953b33a8-d2a0-447d-a01b-49350c6555f7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c7020bcdadd45ca36254c0804806c7a755196f633555327b7a9dcc02218b38d,PodSandboxId:8ba738ffaac3397b5169e0cc01af659d34ff21c2dec2e45f8735198c7a47b8e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737376486732833727,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-gr6pw,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff16a87-0a5e-4d82-b13d-2c72afef6dc0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce27c14f4372580546a1dee06e1ca0af9a4ce57003ff1a1e953a73c369e26a64,PodSandboxId:35ebdb891372d09ec474473a9846a6e8ee5c8dac7d15c0c0c6ae17c6e15d25c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567
591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737376486562246063,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-cf5ts,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91648c6f-7cef-427f-82f3-7572a9b5d80e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89f85dd6796e0c2675ab1f4e799751d229d1d02176268439c098cd50d8ce11d5,PodSandboxId:c50ccb3b5235f4c05da8b8ee07c2209e934053ed45b771ca1145c296c88749a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Im
age:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737376485522140235,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xrg5x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a76bebb9-1eed-46fb-9f3a-d3dc1a5930c7,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8638acaae100210ecf282058dd62a5e3ea7b5c79db7a07ad094ace2ac6ef2660,PodSandboxId:94c625cdf7666877cc760763141edec48e8f0438ef05defc298cae0860f97960,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4
eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737376475392646713,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-987349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db13ee178a4805337ef83e44867afa59,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30c85d4798ae1ae84b15f3986c4dbc996a3cfb0edffdbf27835ddea93015f582,PodSandboxId:10d82d8601366fc207ca6fcaf9f1b94bf6d3f428f133f7f6b872f51625da6d44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0
ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737376475368714376,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-987349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f2db47d8f0a645f120b7173658b9a65,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bdd01b159302164bf969db127c02c6549cb8d4c1ba7d167c2053c9e862b8973,PodSandboxId:4d9c15a7d6dbd7e8963f3c1a8b378804cd7343ea923e52bd815b0223025112d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849
f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737376475353525659,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-987349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6727b264b7c458ad94e48872610ddb7,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3dd779f91d4ca7ffe33376ac388ed52fdb20298b0b2fc5d836c55a0d27c7583,PodSandboxId:48003ac3674f1b851e01a068415748b53be3de8e8ee7bb1229b4bdc76999438d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca
1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737376475351502648,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-987349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b34f790f6e300c4d1df3f7fd74f6779,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52866305914d666e8a2407de69cf9366d4340033b9a8167d54c73c9dffaf4764,PodSandboxId:68e8434976abd18b29a2dd7639801b751e59d9e9ccb6b32c0d6db611486046a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1737376185039900371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-987349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db13ee178a4805337ef83e44867afa59,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9cb01a7e-0ec7-429d-8d33-2f340f81ed0b name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:56:41 embed-certs-987349 crio[726]: time="2025-01-20 12:56:41.066681642Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=96318d33-2e8c-4b35-b5f0-5f1d1b16338b name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:56:41 embed-certs-987349 crio[726]: time="2025-01-20 12:56:41.066745040Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=96318d33-2e8c-4b35-b5f0-5f1d1b16338b name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:56:41 embed-certs-987349 crio[726]: time="2025-01-20 12:56:41.067072889Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:51bb891950cca1451e0a514af521a5272de9baf90097537b982ac68b8c9cb412,PodSandboxId:095c48d982817add96efa4ecaee8d2ffe32a9d848dee9733c61986594ca8e5cf,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737377781696932494,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-hqndr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 8305e335-1e15-4690-aee4-a68de05a85ff,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:139ce8185dd17c5a7ac691b17d32befd2ac55a1385e186f2a79b517f02ecdfec,PodSandboxId:0efcf9e9b960613635d1daf08a339654b6958bb1c2e55245c47078aa8c70ff02,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737376495722664687,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-wbp44,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubern
etes.pod.uid: d3a6da90-14b1-44c8-b292-0e044ef1a038,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77313e42d73c5d3b5f10e6204f5d7007714640bcbe0dbf4bf6a24ccf164c591b,PodSandboxId:2f58a0db99563e51253e70a20592193cf6523afb951fc407365321d032a86dea,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737376487548847560,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: sto
rage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 953b33a8-d2a0-447d-a01b-49350c6555f7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c7020bcdadd45ca36254c0804806c7a755196f633555327b7a9dcc02218b38d,PodSandboxId:8ba738ffaac3397b5169e0cc01af659d34ff21c2dec2e45f8735198c7a47b8e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737376486732833727,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-gr6pw,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff16a87-0a5e-4d82-b13d-2c72afef6dc0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce27c14f4372580546a1dee06e1ca0af9a4ce57003ff1a1e953a73c369e26a64,PodSandboxId:35ebdb891372d09ec474473a9846a6e8ee5c8dac7d15c0c0c6ae17c6e15d25c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567
591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737376486562246063,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-cf5ts,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91648c6f-7cef-427f-82f3-7572a9b5d80e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89f85dd6796e0c2675ab1f4e799751d229d1d02176268439c098cd50d8ce11d5,PodSandboxId:c50ccb3b5235f4c05da8b8ee07c2209e934053ed45b771ca1145c296c88749a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Im
age:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737376485522140235,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xrg5x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a76bebb9-1eed-46fb-9f3a-d3dc1a5930c7,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8638acaae100210ecf282058dd62a5e3ea7b5c79db7a07ad094ace2ac6ef2660,PodSandboxId:94c625cdf7666877cc760763141edec48e8f0438ef05defc298cae0860f97960,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4
eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737376475392646713,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-987349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db13ee178a4805337ef83e44867afa59,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30c85d4798ae1ae84b15f3986c4dbc996a3cfb0edffdbf27835ddea93015f582,PodSandboxId:10d82d8601366fc207ca6fcaf9f1b94bf6d3f428f133f7f6b872f51625da6d44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0
ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737376475368714376,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-987349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f2db47d8f0a645f120b7173658b9a65,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bdd01b159302164bf969db127c02c6549cb8d4c1ba7d167c2053c9e862b8973,PodSandboxId:4d9c15a7d6dbd7e8963f3c1a8b378804cd7343ea923e52bd815b0223025112d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849
f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737376475353525659,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-987349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6727b264b7c458ad94e48872610ddb7,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3dd779f91d4ca7ffe33376ac388ed52fdb20298b0b2fc5d836c55a0d27c7583,PodSandboxId:48003ac3674f1b851e01a068415748b53be3de8e8ee7bb1229b4bdc76999438d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca
1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737376475351502648,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-987349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b34f790f6e300c4d1df3f7fd74f6779,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52866305914d666e8a2407de69cf9366d4340033b9a8167d54c73c9dffaf4764,PodSandboxId:68e8434976abd18b29a2dd7639801b751e59d9e9ccb6b32c0d6db611486046a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1737376185039900371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-987349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db13ee178a4805337ef83e44867afa59,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=96318d33-2e8c-4b35-b5f0-5f1d1b16338b name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:56:41 embed-certs-987349 crio[726]: time="2025-01-20 12:56:41.068292922Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=41d070db-7a27-4df2-bc5a-192b7fa71c06 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:56:41 embed-certs-987349 crio[726]: time="2025-01-20 12:56:41.068407054Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=41d070db-7a27-4df2-bc5a-192b7fa71c06 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:56:41 embed-certs-987349 crio[726]: time="2025-01-20 12:56:41.068692486Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:51bb891950cca1451e0a514af521a5272de9baf90097537b982ac68b8c9cb412,PodSandboxId:095c48d982817add96efa4ecaee8d2ffe32a9d848dee9733c61986594ca8e5cf,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737377781696932494,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-hqndr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 8305e335-1e15-4690-aee4-a68de05a85ff,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:139ce8185dd17c5a7ac691b17d32befd2ac55a1385e186f2a79b517f02ecdfec,PodSandboxId:0efcf9e9b960613635d1daf08a339654b6958bb1c2e55245c47078aa8c70ff02,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737376495722664687,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-wbp44,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubern
etes.pod.uid: d3a6da90-14b1-44c8-b292-0e044ef1a038,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77313e42d73c5d3b5f10e6204f5d7007714640bcbe0dbf4bf6a24ccf164c591b,PodSandboxId:2f58a0db99563e51253e70a20592193cf6523afb951fc407365321d032a86dea,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737376487548847560,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: sto
rage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 953b33a8-d2a0-447d-a01b-49350c6555f7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c7020bcdadd45ca36254c0804806c7a755196f633555327b7a9dcc02218b38d,PodSandboxId:8ba738ffaac3397b5169e0cc01af659d34ff21c2dec2e45f8735198c7a47b8e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737376486732833727,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-gr6pw,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff16a87-0a5e-4d82-b13d-2c72afef6dc0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce27c14f4372580546a1dee06e1ca0af9a4ce57003ff1a1e953a73c369e26a64,PodSandboxId:35ebdb891372d09ec474473a9846a6e8ee5c8dac7d15c0c0c6ae17c6e15d25c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567
591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737376486562246063,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-cf5ts,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91648c6f-7cef-427f-82f3-7572a9b5d80e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89f85dd6796e0c2675ab1f4e799751d229d1d02176268439c098cd50d8ce11d5,PodSandboxId:c50ccb3b5235f4c05da8b8ee07c2209e934053ed45b771ca1145c296c88749a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Im
age:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737376485522140235,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xrg5x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a76bebb9-1eed-46fb-9f3a-d3dc1a5930c7,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8638acaae100210ecf282058dd62a5e3ea7b5c79db7a07ad094ace2ac6ef2660,PodSandboxId:94c625cdf7666877cc760763141edec48e8f0438ef05defc298cae0860f97960,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4
eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737376475392646713,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-987349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db13ee178a4805337ef83e44867afa59,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30c85d4798ae1ae84b15f3986c4dbc996a3cfb0edffdbf27835ddea93015f582,PodSandboxId:10d82d8601366fc207ca6fcaf9f1b94bf6d3f428f133f7f6b872f51625da6d44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0
ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737376475368714376,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-987349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f2db47d8f0a645f120b7173658b9a65,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bdd01b159302164bf969db127c02c6549cb8d4c1ba7d167c2053c9e862b8973,PodSandboxId:4d9c15a7d6dbd7e8963f3c1a8b378804cd7343ea923e52bd815b0223025112d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849
f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737376475353525659,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-987349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6727b264b7c458ad94e48872610ddb7,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3dd779f91d4ca7ffe33376ac388ed52fdb20298b0b2fc5d836c55a0d27c7583,PodSandboxId:48003ac3674f1b851e01a068415748b53be3de8e8ee7bb1229b4bdc76999438d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca
1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737376475351502648,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-987349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b34f790f6e300c4d1df3f7fd74f6779,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52866305914d666e8a2407de69cf9366d4340033b9a8167d54c73c9dffaf4764,PodSandboxId:68e8434976abd18b29a2dd7639801b751e59d9e9ccb6b32c0d6db611486046a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1737376185039900371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-987349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db13ee178a4805337ef83e44867afa59,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=41d070db-7a27-4df2-bc5a-192b7fa71c06 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:56:41 embed-certs-987349 crio[726]: time="2025-01-20 12:56:41.069388373Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=01da175a-3c8a-4739-a4e9-194be01bb498 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 20 12:56:41 embed-certs-987349 crio[726]: time="2025-01-20 12:56:41.069724197Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:0efcf9e9b960613635d1daf08a339654b6958bb1c2e55245c47078aa8c70ff02,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-7779f9b69b-wbp44,Uid:d3a6da90-14b1-44c8-b292-0e044ef1a038,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737376488210614180,Labels:map[string]string{gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-wbp44,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: d3a6da90-14b1-44c8-b292-0e044ef1a038,k8s-app: kubernetes-dashboard,pod-template-hash: 7779f9b69b,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-20T12:34:47.898732199Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:095c48d982817add96efa4ecaee8d2ffe32a9d848dee9733c61986594ca8e5cf,Metadata:&PodSandboxMetadata{Name
:dashboard-metrics-scraper-86c6bf9756-hqndr,Uid:8305e335-1e15-4690-aee4-a68de05a85ff,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737376488210003750,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-hqndr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 8305e335-1e15-4690-aee4-a68de05a85ff,k8s-app: dashboard-metrics-scraper,pod-template-hash: 86c6bf9756,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-20T12:34:47.902171754Z,kubernetes.io/config.source: api,seccomp.security.alpha.kubernetes.io/pod: runtime/default,},RuntimeHandler:,},&PodSandbox{Id:31b7596f0bd3429941215cd3a3b5376607ecea8c0b57540aca13d95adae960b9,Metadata:&PodSandboxMetadata{Name:metrics-server-f79f97bbb-4vcgc,Uid:2108ac96-d8cd-429f-ac2d-babc6d97267b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737376487553295342,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.p
od.name: metrics-server-f79f97bbb-4vcgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2108ac96-d8cd-429f-ac2d-babc6d97267b,k8s-app: metrics-server,pod-template-hash: f79f97bbb,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-20T12:34:47.227930912Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2f58a0db99563e51253e70a20592193cf6523afb951fc407365321d032a86dea,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:953b33a8-d2a0-447d-a01b-49350c6555f7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737376487446165238,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 953b33a8-d2a0-447d-a01b-49350c6555f7,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annot
ations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-01-20T12:34:47.133698523Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:35ebdb891372d09ec474473a9846a6e8ee5c8dac7d15c0c0c6ae17c6e15d25c1,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-cf5ts,Uid:91648c6f-7cef-427f-82f3-7572a9b5d80e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737376485692887083,Labels:map[string]string{io.kubernetes.container
.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-cf5ts,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91648c6f-7cef-427f-82f3-7572a9b5d80e,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-20T12:34:45.370067151Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8ba738ffaac3397b5169e0cc01af659d34ff21c2dec2e45f8735198c7a47b8e6,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-gr6pw,Uid:6ff16a87-0a5e-4d82-b13d-2c72afef6dc0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737376485662589875,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-gr6pw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff16a87-0a5e-4d82-b13d-2c72afef6dc0,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-20T12:34:45.350458598Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSand
box{Id:c50ccb3b5235f4c05da8b8ee07c2209e934053ed45b771ca1145c296c88749a8,Metadata:&PodSandboxMetadata{Name:kube-proxy-xrg5x,Uid:a76bebb9-1eed-46fb-9f3a-d3dc1a5930c7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737376485308790998,Labels:map[string]string{controller-revision-hash: 64b9dbc74b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-xrg5x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a76bebb9-1eed-46fb-9f3a-d3dc1a5930c7,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-20T12:34:44.971732722Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:94c625cdf7666877cc760763141edec48e8f0438ef05defc298cae0860f97960,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-987349,Uid:db13ee178a4805337ef83e44867afa59,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1737376475198117903,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name
: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-987349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db13ee178a4805337ef83e44867afa59,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.170:8443,kubernetes.io/config.hash: db13ee178a4805337ef83e44867afa59,kubernetes.io/config.seen: 2025-01-20T12:34:34.732411573Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:10d82d8601366fc207ca6fcaf9f1b94bf6d3f428f133f7f6b872f51625da6d44,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-987349,Uid:1f2db47d8f0a645f120b7173658b9a65,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737376475194935560,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-987349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f2db47d8f0a645f120b7173658b9a65,tier: control-plan
e,},Annotations:map[string]string{kubernetes.io/config.hash: 1f2db47d8f0a645f120b7173658b9a65,kubernetes.io/config.seen: 2025-01-20T12:34:34.732412630Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:48003ac3674f1b851e01a068415748b53be3de8e8ee7bb1229b4bdc76999438d,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-987349,Uid:7b34f790f6e300c4d1df3f7fd74f6779,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737376475178662483,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-987349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b34f790f6e300c4d1df3f7fd74f6779,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.170:2379,kubernetes.io/config.hash: 7b34f790f6e300c4d1df3f7fd74f6779,kubernetes.io/config.seen: 2025-01-20T12:34:34.732410251Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4d9c15a7d6dbd7e8963f3c
1a8b378804cd7343ea923e52bd815b0223025112d6,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-987349,Uid:d6727b264b7c458ad94e48872610ddb7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737376475169498132,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-987349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6727b264b7c458ad94e48872610ddb7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d6727b264b7c458ad94e48872610ddb7,kubernetes.io/config.seen: 2025-01-20T12:34:34.732406593Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:68e8434976abd18b29a2dd7639801b751e59d9e9ccb6b32c0d6db611486046a9,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-987349,Uid:db13ee178a4805337ef83e44867afa59,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1737376184176633229,Labels:map[string]string{component: kube-apiserver,io.ku
bernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-987349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db13ee178a4805337ef83e44867afa59,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.170:8443,kubernetes.io/config.hash: db13ee178a4805337ef83e44867afa59,kubernetes.io/config.seen: 2025-01-20T12:29:43.630243239Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=01da175a-3c8a-4739-a4e9-194be01bb498 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 20 12:56:41 embed-certs-987349 crio[726]: time="2025-01-20 12:56:41.109941219Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d7831fc7-3303-4977-bde8-e3be1c63fec9 name=/runtime.v1.RuntimeService/Version
	Jan 20 12:56:41 embed-certs-987349 crio[726]: time="2025-01-20 12:56:41.110042331Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d7831fc7-3303-4977-bde8-e3be1c63fec9 name=/runtime.v1.RuntimeService/Version
	Jan 20 12:56:41 embed-certs-987349 crio[726]: time="2025-01-20 12:56:41.111529873Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aa6cd358-c589-4306-ad2f-b4e5847f786b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:56:41 embed-certs-987349 crio[726]: time="2025-01-20 12:56:41.112118280Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377801112090538,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aa6cd358-c589-4306-ad2f-b4e5847f786b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:56:41 embed-certs-987349 crio[726]: time="2025-01-20 12:56:41.112872609Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8241e5ac-bd52-491c-b7de-32fd4c790240 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:56:41 embed-certs-987349 crio[726]: time="2025-01-20 12:56:41.112960901Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8241e5ac-bd52-491c-b7de-32fd4c790240 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:56:41 embed-certs-987349 crio[726]: time="2025-01-20 12:56:41.113288561Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:51bb891950cca1451e0a514af521a5272de9baf90097537b982ac68b8c9cb412,PodSandboxId:095c48d982817add96efa4ecaee8d2ffe32a9d848dee9733c61986594ca8e5cf,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737377781696932494,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-hqndr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 8305e335-1e15-4690-aee4-a68de05a85ff,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:139ce8185dd17c5a7ac691b17d32befd2ac55a1385e186f2a79b517f02ecdfec,PodSandboxId:0efcf9e9b960613635d1daf08a339654b6958bb1c2e55245c47078aa8c70ff02,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737376495722664687,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-wbp44,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubern
etes.pod.uid: d3a6da90-14b1-44c8-b292-0e044ef1a038,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77313e42d73c5d3b5f10e6204f5d7007714640bcbe0dbf4bf6a24ccf164c591b,PodSandboxId:2f58a0db99563e51253e70a20592193cf6523afb951fc407365321d032a86dea,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737376487548847560,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: sto
rage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 953b33a8-d2a0-447d-a01b-49350c6555f7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c7020bcdadd45ca36254c0804806c7a755196f633555327b7a9dcc02218b38d,PodSandboxId:8ba738ffaac3397b5169e0cc01af659d34ff21c2dec2e45f8735198c7a47b8e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737376486732833727,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-gr6pw,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff16a87-0a5e-4d82-b13d-2c72afef6dc0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce27c14f4372580546a1dee06e1ca0af9a4ce57003ff1a1e953a73c369e26a64,PodSandboxId:35ebdb891372d09ec474473a9846a6e8ee5c8dac7d15c0c0c6ae17c6e15d25c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567
591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737376486562246063,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-cf5ts,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91648c6f-7cef-427f-82f3-7572a9b5d80e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89f85dd6796e0c2675ab1f4e799751d229d1d02176268439c098cd50d8ce11d5,PodSandboxId:c50ccb3b5235f4c05da8b8ee07c2209e934053ed45b771ca1145c296c88749a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Im
age:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737376485522140235,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xrg5x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a76bebb9-1eed-46fb-9f3a-d3dc1a5930c7,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8638acaae100210ecf282058dd62a5e3ea7b5c79db7a07ad094ace2ac6ef2660,PodSandboxId:94c625cdf7666877cc760763141edec48e8f0438ef05defc298cae0860f97960,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4
eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737376475392646713,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-987349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db13ee178a4805337ef83e44867afa59,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30c85d4798ae1ae84b15f3986c4dbc996a3cfb0edffdbf27835ddea93015f582,PodSandboxId:10d82d8601366fc207ca6fcaf9f1b94bf6d3f428f133f7f6b872f51625da6d44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0
ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737376475368714376,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-987349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f2db47d8f0a645f120b7173658b9a65,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bdd01b159302164bf969db127c02c6549cb8d4c1ba7d167c2053c9e862b8973,PodSandboxId:4d9c15a7d6dbd7e8963f3c1a8b378804cd7343ea923e52bd815b0223025112d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849
f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737376475353525659,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-987349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6727b264b7c458ad94e48872610ddb7,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3dd779f91d4ca7ffe33376ac388ed52fdb20298b0b2fc5d836c55a0d27c7583,PodSandboxId:48003ac3674f1b851e01a068415748b53be3de8e8ee7bb1229b4bdc76999438d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca
1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737376475351502648,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-987349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b34f790f6e300c4d1df3f7fd74f6779,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52866305914d666e8a2407de69cf9366d4340033b9a8167d54c73c9dffaf4764,PodSandboxId:68e8434976abd18b29a2dd7639801b751e59d9e9ccb6b32c0d6db611486046a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1737376185039900371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-987349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db13ee178a4805337ef83e44867afa59,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8241e5ac-bd52-491c-b7de-32fd4c790240 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:56:41 embed-certs-987349 crio[726]: time="2025-01-20 12:56:41.122573607Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=9ee40665-5b23-4e98-b642-b8633d61ce58 name=/runtime.v1.RuntimeService/Status
	Jan 20 12:56:41 embed-certs-987349 crio[726]: time="2025-01-20 12:56:41.122658466Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=9ee40665-5b23-4e98-b642-b8633d61ce58 name=/runtime.v1.RuntimeService/Status
	Jan 20 12:56:41 embed-certs-987349 crio[726]: time="2025-01-20 12:56:41.163120311Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=990a73f4-d141-4309-a691-39cd836f6e76 name=/runtime.v1.RuntimeService/Version
	Jan 20 12:56:41 embed-certs-987349 crio[726]: time="2025-01-20 12:56:41.163226574Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=990a73f4-d141-4309-a691-39cd836f6e76 name=/runtime.v1.RuntimeService/Version
	Jan 20 12:56:41 embed-certs-987349 crio[726]: time="2025-01-20 12:56:41.164574296Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=88f04212-a1a7-45f1-ba0b-20f0d0882616 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:56:41 embed-certs-987349 crio[726]: time="2025-01-20 12:56:41.165419695Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377801165324038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=88f04212-a1a7-45f1-ba0b-20f0d0882616 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:56:41 embed-certs-987349 crio[726]: time="2025-01-20 12:56:41.166225059Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=542be142-efc1-4db1-86b2-2c997e2e051e name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:56:41 embed-certs-987349 crio[726]: time="2025-01-20 12:56:41.166283799Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=542be142-efc1-4db1-86b2-2c997e2e051e name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:56:41 embed-certs-987349 crio[726]: time="2025-01-20 12:56:41.166597169Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:51bb891950cca1451e0a514af521a5272de9baf90097537b982ac68b8c9cb412,PodSandboxId:095c48d982817add96efa4ecaee8d2ffe32a9d848dee9733c61986594ca8e5cf,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737377781696932494,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-hqndr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 8305e335-1e15-4690-aee4-a68de05a85ff,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:139ce8185dd17c5a7ac691b17d32befd2ac55a1385e186f2a79b517f02ecdfec,PodSandboxId:0efcf9e9b960613635d1daf08a339654b6958bb1c2e55245c47078aa8c70ff02,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737376495722664687,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-wbp44,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubern
etes.pod.uid: d3a6da90-14b1-44c8-b292-0e044ef1a038,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77313e42d73c5d3b5f10e6204f5d7007714640bcbe0dbf4bf6a24ccf164c591b,PodSandboxId:2f58a0db99563e51253e70a20592193cf6523afb951fc407365321d032a86dea,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737376487548847560,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: sto
rage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 953b33a8-d2a0-447d-a01b-49350c6555f7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c7020bcdadd45ca36254c0804806c7a755196f633555327b7a9dcc02218b38d,PodSandboxId:8ba738ffaac3397b5169e0cc01af659d34ff21c2dec2e45f8735198c7a47b8e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737376486732833727,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-gr6pw,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff16a87-0a5e-4d82-b13d-2c72afef6dc0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce27c14f4372580546a1dee06e1ca0af9a4ce57003ff1a1e953a73c369e26a64,PodSandboxId:35ebdb891372d09ec474473a9846a6e8ee5c8dac7d15c0c0c6ae17c6e15d25c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567
591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737376486562246063,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-cf5ts,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91648c6f-7cef-427f-82f3-7572a9b5d80e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89f85dd6796e0c2675ab1f4e799751d229d1d02176268439c098cd50d8ce11d5,PodSandboxId:c50ccb3b5235f4c05da8b8ee07c2209e934053ed45b771ca1145c296c88749a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Im
age:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737376485522140235,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xrg5x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a76bebb9-1eed-46fb-9f3a-d3dc1a5930c7,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8638acaae100210ecf282058dd62a5e3ea7b5c79db7a07ad094ace2ac6ef2660,PodSandboxId:94c625cdf7666877cc760763141edec48e8f0438ef05defc298cae0860f97960,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4
eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737376475392646713,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-987349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db13ee178a4805337ef83e44867afa59,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30c85d4798ae1ae84b15f3986c4dbc996a3cfb0edffdbf27835ddea93015f582,PodSandboxId:10d82d8601366fc207ca6fcaf9f1b94bf6d3f428f133f7f6b872f51625da6d44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0
ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737376475368714376,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-987349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f2db47d8f0a645f120b7173658b9a65,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bdd01b159302164bf969db127c02c6549cb8d4c1ba7d167c2053c9e862b8973,PodSandboxId:4d9c15a7d6dbd7e8963f3c1a8b378804cd7343ea923e52bd815b0223025112d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849
f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737376475353525659,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-987349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6727b264b7c458ad94e48872610ddb7,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3dd779f91d4ca7ffe33376ac388ed52fdb20298b0b2fc5d836c55a0d27c7583,PodSandboxId:48003ac3674f1b851e01a068415748b53be3de8e8ee7bb1229b4bdc76999438d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca
1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737376475351502648,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-987349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b34f790f6e300c4d1df3f7fd74f6779,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52866305914d666e8a2407de69cf9366d4340033b9a8167d54c73c9dffaf4764,PodSandboxId:68e8434976abd18b29a2dd7639801b751e59d9e9ccb6b32c0d6db611486046a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1737376185039900371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-987349,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db13ee178a4805337ef83e44867afa59,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=542be142-efc1-4db1-86b2-2c997e2e051e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	51bb891950cca       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago      Exited              dashboard-metrics-scraper   9                   095c48d982817       dashboard-metrics-scraper-86c6bf9756-hqndr
	139ce8185dd17       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   21 minutes ago      Running             kubernetes-dashboard        0                   0efcf9e9b9606       kubernetes-dashboard-7779f9b69b-wbp44
	77313e42d73c5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 minutes ago      Running             storage-provisioner         0                   2f58a0db99563       storage-provisioner
	9c7020bcdadd4       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           21 minutes ago      Running             coredns                     0                   8ba738ffaac33       coredns-668d6bf9bc-gr6pw
	ce27c14f43725       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           21 minutes ago      Running             coredns                     0                   35ebdb891372d       coredns-668d6bf9bc-cf5ts
	89f85dd6796e0       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08                                           21 minutes ago      Running             kube-proxy                  0                   c50ccb3b5235f       kube-proxy-xrg5x
	8638acaae1002       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4                                           22 minutes ago      Running             kube-apiserver              2                   94c625cdf7666       kube-apiserver-embed-certs-987349
	30c85d4798ae1       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3                                           22 minutes ago      Running             kube-controller-manager     2                   10d82d8601366       kube-controller-manager-embed-certs-987349
	3bdd01b159302       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5                                           22 minutes ago      Running             kube-scheduler              2                   4d9c15a7d6dbd       kube-scheduler-embed-certs-987349
	b3dd779f91d4c       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                           22 minutes ago      Running             etcd                        2                   48003ac3674f1       etcd-embed-certs-987349
	52866305914d6       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4                                           26 minutes ago      Exited              kube-apiserver              1                   68e8434976abd       kube-apiserver-embed-certs-987349
	
	
	==> coredns [9c7020bcdadd45ca36254c0804806c7a755196f633555327b7a9dcc02218b38d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [ce27c14f4372580546a1dee06e1ca0af9a4ce57003ff1a1e953a73c369e26a64] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-987349
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-987349
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77d80cf1517f5f1439721b28711982314b21bec9
	                    minikube.k8s.io/name=embed-certs-987349
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_20T12_34_41_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Jan 2025 12:34:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-987349
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Jan 2025 12:56:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Jan 2025 12:54:44 +0000   Mon, 20 Jan 2025 12:34:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Jan 2025 12:54:44 +0000   Mon, 20 Jan 2025 12:34:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Jan 2025 12:54:44 +0000   Mon, 20 Jan 2025 12:34:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Jan 2025 12:54:44 +0000   Mon, 20 Jan 2025 12:34:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.170
	  Hostname:    embed-certs-987349
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 08151781dfd04743b6147bd9a374c7c3
	  System UUID:                08151781-dfd0-4743-b614-7bd9a374c7c3
	  Boot ID:                    2128f6aa-5711-4223-8624-9b60c9e42c07
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-cf5ts                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 coredns-668d6bf9bc-gr6pw                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-embed-certs-987349                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-embed-certs-987349             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-embed-certs-987349    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-xrg5x                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-987349             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-f79f97bbb-4vcgc                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-hqndr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-wbp44         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 21m   kube-proxy       
	  Normal  Starting                 22m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m   kubelet          Node embed-certs-987349 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m   kubelet          Node embed-certs-987349 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m   kubelet          Node embed-certs-987349 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m   node-controller  Node embed-certs-987349 event: Registered Node embed-certs-987349 in Controller
	
	
	==> dmesg <==
	[  +0.037188] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.919029] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.218072] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.620485] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.965723] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.056913] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059531] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.187729] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.115989] systemd-fstab-generator[686]: Ignoring "noauto" option for root device
	[  +0.250091] systemd-fstab-generator[716]: Ignoring "noauto" option for root device
	[  +3.922735] systemd-fstab-generator[809]: Ignoring "noauto" option for root device
	[  +2.142538] systemd-fstab-generator[932]: Ignoring "noauto" option for root device
	[  +0.054764] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.492737] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.885484] kauditd_printk_skb: 92 callbacks suppressed
	[Jan20 12:34] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.698592] systemd-fstab-generator[2688]: Ignoring "noauto" option for root device
	[  +4.742108] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.620860] systemd-fstab-generator[3023]: Ignoring "noauto" option for root device
	[  +4.894432] systemd-fstab-generator[3153]: Ignoring "noauto" option for root device
	[  +0.147751] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.663085] kauditd_printk_skb: 112 callbacks suppressed
	[Jan20 12:35] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [b3dd779f91d4ca7ffe33376ac388ed52fdb20298b0b2fc5d836c55a0d27c7583] <==
	{"level":"info","ts":"2025-01-20T12:55:59.737429Z","caller":"traceutil/trace.go:171","msg":"trace[111273378] linearizableReadLoop","detail":"{readStateIndex:1944; appliedIndex:1943; }","duration":"286.332892ms","start":"2025-01-20T12:55:59.451086Z","end":"2025-01-20T12:55:59.737419Z","steps":["trace[111273378] 'read index received'  (duration: 130.812787ms)","trace[111273378] 'applied index is now lower than readState.Index'  (duration: 155.519317ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-20T12:55:59.737515Z","caller":"traceutil/trace.go:171","msg":"trace[421358296] transaction","detail":"{read_only:false; response_revision:1664; number_of_response:1; }","duration":"286.538197ms","start":"2025-01-20T12:55:59.450967Z","end":"2025-01-20T12:55:59.737505Z","steps":["trace[421358296] 'process raft request'  (duration: 130.988177ms)","trace[421358296] 'compare'  (duration: 132.835247ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-20T12:55:59.737917Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"286.819843ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T12:55:59.739495Z","caller":"traceutil/trace.go:171","msg":"trace[615816488] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1664; }","duration":"288.415704ms","start":"2025-01-20T12:55:59.451063Z","end":"2025-01-20T12:55:59.739478Z","steps":["trace[615816488] 'agreement among raft nodes before linearized reading'  (duration: 286.815139ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T12:55:59.992487Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"174.644131ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T12:55:59.992562Z","caller":"traceutil/trace.go:171","msg":"trace[1542656024] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1664; }","duration":"174.743087ms","start":"2025-01-20T12:55:59.817804Z","end":"2025-01-20T12:55:59.992547Z","steps":["trace[1542656024] 'range keys from in-memory index tree'  (duration: 174.602795ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T12:55:59.992518Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.525859ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T12:55:59.992691Z","caller":"traceutil/trace.go:171","msg":"trace[651429978] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1664; }","duration":"141.763096ms","start":"2025-01-20T12:55:59.850913Z","end":"2025-01-20T12:55:59.992676Z","steps":["trace[651429978] 'range keys from in-memory index tree'  (duration: 141.452213ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T12:56:00.676836Z","caller":"traceutil/trace.go:171","msg":"trace[1633655826] transaction","detail":"{read_only:false; response_revision:1665; number_of_response:1; }","duration":"275.03196ms","start":"2025-01-20T12:56:00.401789Z","end":"2025-01-20T12:56:00.676821Z","steps":["trace[1633655826] 'process raft request'  (duration: 274.837385ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T12:56:00.679765Z","caller":"traceutil/trace.go:171","msg":"trace[1230433191] linearizableReadLoop","detail":"{readStateIndex:1945; appliedIndex:1945; }","duration":"228.297518ms","start":"2025-01-20T12:56:00.451452Z","end":"2025-01-20T12:56:00.679749Z","steps":["trace[1230433191] 'read index received'  (duration: 228.291ms)","trace[1230433191] 'applied index is now lower than readState.Index'  (duration: 5.408µs)"],"step_count":2}
	{"level":"warn","ts":"2025-01-20T12:56:00.679869Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.131174ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T12:56:00.679899Z","caller":"traceutil/trace.go:171","msg":"trace[504922164] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1665; }","duration":"125.17188ms","start":"2025-01-20T12:56:00.554720Z","end":"2025-01-20T12:56:00.679891Z","steps":["trace[504922164] 'agreement among raft nodes before linearized reading'  (duration: 125.115279ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T12:56:00.680168Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"228.529953ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T12:56:00.680246Z","caller":"traceutil/trace.go:171","msg":"trace[672428726] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1665; }","duration":"228.781118ms","start":"2025-01-20T12:56:00.451426Z","end":"2025-01-20T12:56:00.680207Z","steps":["trace[672428726] 'agreement among raft nodes before linearized reading'  (duration: 228.486473ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T12:56:29.197582Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"148.060235ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T12:56:29.197704Z","caller":"traceutil/trace.go:171","msg":"trace[2003130001] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1692; }","duration":"148.216322ms","start":"2025-01-20T12:56:29.049467Z","end":"2025-01-20T12:56:29.197683Z","steps":["trace[2003130001] 'range keys from in-memory index tree'  (duration: 147.958279ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T12:56:29.756447Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"202.214122ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T12:56:29.756526Z","caller":"traceutil/trace.go:171","msg":"trace[1570702483] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1692; }","duration":"202.295114ms","start":"2025-01-20T12:56:29.554214Z","end":"2025-01-20T12:56:29.756509Z","steps":["trace[1570702483] 'range keys from in-memory index tree'  (duration: 202.163434ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T12:56:29.756786Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"284.272798ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16670800264066379185 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.72.170\" mod_revision:1681 > success:<request_put:<key:\"/registry/masterleases/192.168.72.170\" value_size:68 lease:7447428227211603375 >> failure:<request_range:<key:\"/registry/masterleases/192.168.72.170\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-01-20T12:56:29.757173Z","caller":"traceutil/trace.go:171","msg":"trace[907024558] linearizableReadLoop","detail":"{readStateIndex:1980; appliedIndex:1979; }","duration":"307.534061ms","start":"2025-01-20T12:56:29.449629Z","end":"2025-01-20T12:56:29.757163Z","steps":["trace[907024558] 'read index received'  (duration: 22.800815ms)","trace[907024558] 'applied index is now lower than readState.Index'  (duration: 284.732155ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-20T12:56:29.757179Z","caller":"traceutil/trace.go:171","msg":"trace[1848808645] transaction","detail":"{read_only:false; response_revision:1693; number_of_response:1; }","duration":"405.675742ms","start":"2025-01-20T12:56:29.351490Z","end":"2025-01-20T12:56:29.757166Z","steps":["trace[1848808645] 'process raft request'  (duration: 120.954413ms)","trace[1848808645] 'compare'  (duration: 284.182577ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-20T12:56:29.757365Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"307.699566ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T12:56:29.757988Z","caller":"traceutil/trace.go:171","msg":"trace[1009511699] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1693; }","duration":"308.377903ms","start":"2025-01-20T12:56:29.449599Z","end":"2025-01-20T12:56:29.757977Z","steps":["trace[1009511699] 'agreement among raft nodes before linearized reading'  (duration: 307.628444ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T12:56:29.758063Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-20T12:56:29.449580Z","time spent":"308.466938ms","remote":"127.0.0.1:52522","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-01-20T12:56:29.757531Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-20T12:56:29.351476Z","time spent":"405.961124ms","remote":"127.0.0.1:52342","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":121,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.72.170\" mod_revision:1681 > success:<request_put:<key:\"/registry/masterleases/192.168.72.170\" value_size:68 lease:7447428227211603375 >> failure:<request_range:<key:\"/registry/masterleases/192.168.72.170\" > >"}
	
	
	==> kernel <==
	 12:56:41 up 27 min,  0 users,  load average: 0.59, 0.37, 0.23
	Linux embed-certs-987349 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [52866305914d666e8a2407de69cf9366d4340033b9a8167d54c73c9dffaf4764] <==
	W0120 12:34:30.557813       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:34:30.630089       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:34:30.697171       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:34:30.697618       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:34:30.761636       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:34:30.772425       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:34:30.845087       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:34:30.864564       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:34:30.918097       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:34:30.942773       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:34:31.062851       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:34:31.121117       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:34:31.211110       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:34:31.227307       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:34:31.404700       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:34:31.414286       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:34:31.446319       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:34:31.469969       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:34:31.487593       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:34:31.497050       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:34:31.501531       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:34:31.521551       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:34:31.561140       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:34:31.784848       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:34:32.047745       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [8638acaae100210ecf282058dd62a5e3ea7b5c79db7a07ad094ace2ac6ef2660] <==
	I0120 12:52:38.717499       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0120 12:52:38.717566       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0120 12:54:37.717636       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 12:54:37.718132       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0120 12:54:38.720610       1 handler_proxy.go:99] no RequestInfo found in the context
	W0120 12:54:38.720685       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 12:54:38.720869       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0120 12:54:38.720944       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0120 12:54:38.722067       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0120 12:54:38.722116       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0120 12:55:38.722590       1 handler_proxy.go:99] no RequestInfo found in the context
	W0120 12:55:38.722590       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 12:55:38.722866       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0120 12:55:38.722943       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0120 12:55:38.724063       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0120 12:55:38.724114       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [30c85d4798ae1ae84b15f3986c4dbc996a3cfb0edffdbf27835ddea93015f582] <==
	E0120 12:51:44.459433       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:51:44.581119       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 12:52:14.465114       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:52:14.590008       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 12:52:44.472547       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:52:44.599139       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 12:53:14.479139       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:53:14.606705       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 12:53:44.484934       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:53:44.612739       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 12:54:14.492313       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:54:14.619962       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 12:54:44.500202       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:54:44.632127       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0120 12:54:44.643023       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="embed-certs-987349"
	E0120 12:55:14.510605       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:55:14.642998       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 12:55:44.516770       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:55:44.649751       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0120 12:56:06.696980       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="93.311µs"
	E0120 12:56:14.523286       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:56:14.657821       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0120 12:56:20.698916       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="49.266µs"
	I0120 12:56:21.899755       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="65.388µs"
	I0120 12:56:25.293667       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="82.5µs"
	
	
	==> kube-proxy [89f85dd6796e0c2675ab1f4e799751d229d1d02176268439c098cd50d8ce11d5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0120 12:34:46.043016       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0120 12:34:46.059645       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.72.170"]
	E0120 12:34:46.059705       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0120 12:34:46.144418       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0120 12:34:46.144452       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0120 12:34:46.144473       1 server_linux.go:170] "Using iptables Proxier"
	I0120 12:34:46.147427       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0120 12:34:46.147734       1 server.go:497] "Version info" version="v1.32.0"
	I0120 12:34:46.147749       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0120 12:34:46.154053       1 config.go:199] "Starting service config controller"
	I0120 12:34:46.154090       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0120 12:34:46.154137       1 config.go:105] "Starting endpoint slice config controller"
	I0120 12:34:46.154144       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0120 12:34:46.154875       1 config.go:329] "Starting node config controller"
	I0120 12:34:46.154885       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0120 12:34:46.254523       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0120 12:34:46.254554       1 shared_informer.go:320] Caches are synced for service config
	I0120 12:34:46.254914       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3bdd01b159302164bf969db127c02c6549cb8d4c1ba7d167c2053c9e862b8973] <==
	W0120 12:34:37.747311       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0120 12:34:37.747732       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 12:34:37.747373       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0120 12:34:37.747775       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 12:34:37.747402       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0120 12:34:37.747817       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 12:34:37.747422       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0120 12:34:37.747866       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 12:34:37.747457       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0120 12:34:37.747912       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 12:34:38.564561       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0120 12:34:38.564607       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 12:34:38.647490       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0120 12:34:38.647657       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0120 12:34:38.731636       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0120 12:34:38.731685       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0120 12:34:38.748369       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0120 12:34:38.748411       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0120 12:34:38.768909       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0120 12:34:38.768950       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 12:34:38.798369       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0120 12:34:38.798725       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0120 12:34:38.905128       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0120 12:34:38.905180       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0120 12:34:40.841921       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 20 12:56:09 embed-certs-987349 kubelet[3030]: I0120 12:56:09.675062    3030 scope.go:117] "RemoveContainer" containerID="122c1cca7ce1989e7721706974ea751e1e99fb3f4cbd5f47aaec902314429ca6"
	Jan 20 12:56:09 embed-certs-987349 kubelet[3030]: E0120 12:56:09.675682    3030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-hqndr_kubernetes-dashboard(8305e335-1e15-4690-aee4-a68de05a85ff)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-hqndr" podUID="8305e335-1e15-4690-aee4-a68de05a85ff"
	Jan 20 12:56:11 embed-certs-987349 kubelet[3030]: E0120 12:56:11.041538    3030 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377771040969101,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 12:56:11 embed-certs-987349 kubelet[3030]: E0120 12:56:11.042035    3030 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377771040969101,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 12:56:20 embed-certs-987349 kubelet[3030]: E0120 12:56:20.677118    3030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-4vcgc" podUID="2108ac96-d8cd-429f-ac2d-babc6d97267b"
	Jan 20 12:56:21 embed-certs-987349 kubelet[3030]: E0120 12:56:21.044628    3030 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377781044042760,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 12:56:21 embed-certs-987349 kubelet[3030]: E0120 12:56:21.044730    3030 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377781044042760,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 12:56:21 embed-certs-987349 kubelet[3030]: I0120 12:56:21.675239    3030 scope.go:117] "RemoveContainer" containerID="122c1cca7ce1989e7721706974ea751e1e99fb3f4cbd5f47aaec902314429ca6"
	Jan 20 12:56:21 embed-certs-987349 kubelet[3030]: I0120 12:56:21.878124    3030 scope.go:117] "RemoveContainer" containerID="122c1cca7ce1989e7721706974ea751e1e99fb3f4cbd5f47aaec902314429ca6"
	Jan 20 12:56:21 embed-certs-987349 kubelet[3030]: I0120 12:56:21.878412    3030 scope.go:117] "RemoveContainer" containerID="51bb891950cca1451e0a514af521a5272de9baf90097537b982ac68b8c9cb412"
	Jan 20 12:56:21 embed-certs-987349 kubelet[3030]: E0120 12:56:21.878566    3030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-hqndr_kubernetes-dashboard(8305e335-1e15-4690-aee4-a68de05a85ff)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-hqndr" podUID="8305e335-1e15-4690-aee4-a68de05a85ff"
	Jan 20 12:56:25 embed-certs-987349 kubelet[3030]: I0120 12:56:25.277268    3030 scope.go:117] "RemoveContainer" containerID="51bb891950cca1451e0a514af521a5272de9baf90097537b982ac68b8c9cb412"
	Jan 20 12:56:25 embed-certs-987349 kubelet[3030]: E0120 12:56:25.277581    3030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-hqndr_kubernetes-dashboard(8305e335-1e15-4690-aee4-a68de05a85ff)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-hqndr" podUID="8305e335-1e15-4690-aee4-a68de05a85ff"
	Jan 20 12:56:31 embed-certs-987349 kubelet[3030]: E0120 12:56:31.046626    3030 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377791046102967,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 12:56:31 embed-certs-987349 kubelet[3030]: E0120 12:56:31.047082    3030 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377791046102967,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 12:56:34 embed-certs-987349 kubelet[3030]: E0120 12:56:34.677528    3030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-4vcgc" podUID="2108ac96-d8cd-429f-ac2d-babc6d97267b"
	Jan 20 12:56:35 embed-certs-987349 kubelet[3030]: I0120 12:56:35.675434    3030 scope.go:117] "RemoveContainer" containerID="51bb891950cca1451e0a514af521a5272de9baf90097537b982ac68b8c9cb412"
	Jan 20 12:56:35 embed-certs-987349 kubelet[3030]: E0120 12:56:35.675819    3030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-hqndr_kubernetes-dashboard(8305e335-1e15-4690-aee4-a68de05a85ff)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-hqndr" podUID="8305e335-1e15-4690-aee4-a68de05a85ff"
	Jan 20 12:56:40 embed-certs-987349 kubelet[3030]: E0120 12:56:40.710003    3030 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 20 12:56:40 embed-certs-987349 kubelet[3030]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 20 12:56:40 embed-certs-987349 kubelet[3030]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 20 12:56:40 embed-certs-987349 kubelet[3030]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 20 12:56:40 embed-certs-987349 kubelet[3030]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 20 12:56:41 embed-certs-987349 kubelet[3030]: E0120 12:56:41.049715    3030 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377801049227245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 12:56:41 embed-certs-987349 kubelet[3030]: E0120 12:56:41.049760    3030 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377801049227245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> kubernetes-dashboard [139ce8185dd17c5a7ac691b17d32befd2ac55a1385e186f2a79b517f02ecdfec] <==
	2025/01/20 12:44:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:44:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:45:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:45:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:46:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:46:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:47:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:47:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:48:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:48:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:49:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:49:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:50:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:50:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:51:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:51:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:52:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:52:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:53:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:53:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:54:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:54:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:55:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:55:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:56:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [77313e42d73c5d3b5f10e6204f5d7007714640bcbe0dbf4bf6a24ccf164c591b] <==
	I0120 12:34:47.744836       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0120 12:34:47.796225       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0120 12:34:47.796276       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0120 12:34:47.850914       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0120 12:34:47.851088       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-987349_98ab447e-1a60-4431-9c54-f377099e2d80!
	I0120 12:34:47.863306       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6858a3a2-e2df-4db4-81ce-b0333842f477", APIVersion:"v1", ResourceVersion:"412", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-987349_98ab447e-1a60-4431-9c54-f377099e2d80 became leader
	I0120 12:34:47.953445       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-987349_98ab447e-1a60-4431-9c54-f377099e2d80!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-987349 -n embed-certs-987349
E0120 12:56:42.642968  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/no-preload-496524/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-987349 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-4vcgc
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-987349 describe pod metrics-server-f79f97bbb-4vcgc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-987349 describe pod metrics-server-f79f97bbb-4vcgc: exit status 1 (80.484681ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-4vcgc" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-987349 describe pod metrics-server-f79f97bbb-4vcgc: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (1645.71s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.54s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-134433 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context old-k8s-version-134433 create -f testdata/busybox.yaml: exit status 1 (45.382179ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-134433" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context old-k8s-version-134433 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-134433 -n old-k8s-version-134433
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-134433 -n old-k8s-version-134433: exit status 6 (248.932814ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0120 12:29:28.056174  992801 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-134433" does not appear in /home/jenkins/minikube-integration/20151-942401/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-134433" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-134433 -n old-k8s-version-134433
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-134433 -n old-k8s-version-134433: exit status 6 (247.026994ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0120 12:29:28.302656  992831 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-134433" does not appear in /home/jenkins/minikube-integration/20151-942401/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-134433" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.54s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (100.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-134433 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0120 12:29:37.399725  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-134433 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m39.818790372s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-134433 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-134433 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-134433 describe deploy/metrics-server -n kube-system: exit status 1 (50.133516ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-134433" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-134433 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-134433 -n old-k8s-version-134433
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-134433 -n old-k8s-version-134433: exit status 6 (230.52796ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0120 12:31:08.405175  993456 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-134433" does not appear in /home/jenkins/minikube-integration/20151-942401/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-134433" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (100.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (1643.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-981597 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p default-k8s-diff-port-981597 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: signal: killed (27m21.264100348s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-981597] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20151
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20151-942401/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-942401/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "default-k8s-diff-port-981597" primary control-plane node in "default-k8s-diff-port-981597" cluster
	* Restarting existing kvm2 VM for "default-k8s-diff-port-981597" ...
	* Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-981597 addons enable metrics-server
	
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 12:30:09.321582  993131 out.go:345] Setting OutFile to fd 1 ...
	I0120 12:30:09.321679  993131 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:30:09.321687  993131 out.go:358] Setting ErrFile to fd 2...
	I0120 12:30:09.321692  993131 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:30:09.321848  993131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-942401/.minikube/bin
	I0120 12:30:09.322428  993131 out.go:352] Setting JSON to false
	I0120 12:30:09.323481  993131 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":18752,"bootTime":1737357457,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 12:30:09.323586  993131 start.go:139] virtualization: kvm guest
	I0120 12:30:09.326446  993131 out.go:177] * [default-k8s-diff-port-981597] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 12:30:09.328060  993131 notify.go:220] Checking for updates...
	I0120 12:30:09.328132  993131 out.go:177]   - MINIKUBE_LOCATION=20151
	I0120 12:30:09.329544  993131 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 12:30:09.331293  993131 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:30:09.332788  993131 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-942401/.minikube
	I0120 12:30:09.334094  993131 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 12:30:09.335345  993131 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 12:30:09.336947  993131 config.go:182] Loaded profile config "default-k8s-diff-port-981597": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:30:09.337316  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:30:09.337366  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:30:09.352981  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37689
	I0120 12:30:09.353482  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:30:09.354079  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:30:09.354109  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:30:09.354455  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:30:09.354706  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .DriverName
	I0120 12:30:09.355010  993131 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 12:30:09.355475  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:30:09.355526  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:30:09.370954  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44453
	I0120 12:30:09.371505  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:30:09.372162  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:30:09.372209  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:30:09.372547  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:30:09.372767  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .DriverName
	I0120 12:30:09.413273  993131 out.go:177] * Using the kvm2 driver based on existing profile
	I0120 12:30:09.414648  993131 start.go:297] selected driver: kvm2
	I0120 12:30:09.414670  993131 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-981597 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:default-k8
s-diff-port-981597 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.222 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Ex
traDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:30:09.414785  993131 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 12:30:09.415522  993131 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:30:09.415623  993131 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20151-942401/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 12:30:09.431627  993131 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 12:30:09.432025  993131 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 12:30:09.432065  993131 cni.go:84] Creating CNI manager for ""
	I0120 12:30:09.432111  993131 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:30:09.432179  993131 start.go:340] cluster config:
	{Name:default-k8s-diff-port-981597 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-981597 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.222 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/mi
nikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:30:09.432312  993131 iso.go:125] acquiring lock: {Name:mk7f0ac9e7ba04626414e742c3e5292d79996a4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:30:09.434159  993131 out.go:177] * Starting "default-k8s-diff-port-981597" primary control-plane node in "default-k8s-diff-port-981597" cluster
	I0120 12:30:09.435405  993131 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 12:30:09.435459  993131 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0120 12:30:09.435471  993131 cache.go:56] Caching tarball of preloaded images
	I0120 12:30:09.435575  993131 preload.go:172] Found /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 12:30:09.435589  993131 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0120 12:30:09.435695  993131 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/default-k8s-diff-port-981597/config.json ...
	I0120 12:30:09.435905  993131 start.go:360] acquireMachinesLock for default-k8s-diff-port-981597: {Name:mkd5527ce9753efd08511b23d71dbb6bbf416f1b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 12:30:09.435968  993131 start.go:364] duration metric: took 39.243µs to acquireMachinesLock for "default-k8s-diff-port-981597"
	I0120 12:30:09.435989  993131 start.go:96] Skipping create...Using existing machine configuration
	I0120 12:30:09.435999  993131 fix.go:54] fixHost starting: 
	I0120 12:30:09.436320  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:30:09.436369  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:30:09.453317  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45731
	I0120 12:30:09.453784  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:30:09.454283  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:30:09.454309  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:30:09.454741  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:30:09.454999  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .DriverName
	I0120 12:30:09.455177  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetState
	I0120 12:30:09.456826  993131 fix.go:112] recreateIfNeeded on default-k8s-diff-port-981597: state=Stopped err=<nil>
	I0120 12:30:09.456879  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .DriverName
	W0120 12:30:09.457039  993131 fix.go:138] unexpected machine state, will restart: <nil>
	I0120 12:30:09.458855  993131 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-981597" ...
	I0120 12:30:09.460043  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .Start
	I0120 12:30:09.460274  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) starting domain...
	I0120 12:30:09.460296  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) ensuring networks are active...
	I0120 12:30:09.461047  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Ensuring network default is active
	I0120 12:30:09.461482  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Ensuring network mk-default-k8s-diff-port-981597 is active
	I0120 12:30:09.461940  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) getting domain XML...
	I0120 12:30:09.463020  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) creating domain...
	I0120 12:30:10.733405  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) waiting for IP...
	I0120 12:30:10.734173  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:10.734596  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | unable to find current IP address of domain default-k8s-diff-port-981597 in network mk-default-k8s-diff-port-981597
	I0120 12:30:10.734726  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | I0120 12:30:10.734594  993183 retry.go:31] will retry after 251.386895ms: waiting for domain to come up
	I0120 12:30:10.988178  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:10.988751  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | unable to find current IP address of domain default-k8s-diff-port-981597 in network mk-default-k8s-diff-port-981597
	I0120 12:30:10.988793  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | I0120 12:30:10.988713  993183 retry.go:31] will retry after 355.853976ms: waiting for domain to come up
	I0120 12:30:11.346478  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:11.346938  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | unable to find current IP address of domain default-k8s-diff-port-981597 in network mk-default-k8s-diff-port-981597
	I0120 12:30:11.346964  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | I0120 12:30:11.346900  993183 retry.go:31] will retry after 294.291575ms: waiting for domain to come up
	I0120 12:30:11.643185  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:11.643688  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | unable to find current IP address of domain default-k8s-diff-port-981597 in network mk-default-k8s-diff-port-981597
	I0120 12:30:11.643767  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | I0120 12:30:11.643676  993183 retry.go:31] will retry after 575.563187ms: waiting for domain to come up
	I0120 12:30:12.220479  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:12.221135  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | unable to find current IP address of domain default-k8s-diff-port-981597 in network mk-default-k8s-diff-port-981597
	I0120 12:30:12.221177  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | I0120 12:30:12.221075  993183 retry.go:31] will retry after 742.371802ms: waiting for domain to come up
	I0120 12:30:12.965048  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:12.965547  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | unable to find current IP address of domain default-k8s-diff-port-981597 in network mk-default-k8s-diff-port-981597
	I0120 12:30:12.965632  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | I0120 12:30:12.965537  993183 retry.go:31] will retry after 935.843473ms: waiting for domain to come up
	I0120 12:30:13.902600  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:13.903237  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | unable to find current IP address of domain default-k8s-diff-port-981597 in network mk-default-k8s-diff-port-981597
	I0120 12:30:13.903262  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | I0120 12:30:13.903192  993183 retry.go:31] will retry after 1.150438224s: waiting for domain to come up
	I0120 12:30:15.054991  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:15.055521  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | unable to find current IP address of domain default-k8s-diff-port-981597 in network mk-default-k8s-diff-port-981597
	I0120 12:30:15.055553  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | I0120 12:30:15.055485  993183 retry.go:31] will retry after 1.401706652s: waiting for domain to come up
	I0120 12:30:16.458507  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:16.459076  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | unable to find current IP address of domain default-k8s-diff-port-981597 in network mk-default-k8s-diff-port-981597
	I0120 12:30:16.459102  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | I0120 12:30:16.459013  993183 retry.go:31] will retry after 1.52570809s: waiting for domain to come up
	I0120 12:30:17.986679  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:17.987151  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | unable to find current IP address of domain default-k8s-diff-port-981597 in network mk-default-k8s-diff-port-981597
	I0120 12:30:17.987194  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | I0120 12:30:17.987128  993183 retry.go:31] will retry after 2.077368031s: waiting for domain to come up
	I0120 12:30:20.066015  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:20.066574  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | unable to find current IP address of domain default-k8s-diff-port-981597 in network mk-default-k8s-diff-port-981597
	I0120 12:30:20.066642  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | I0120 12:30:20.066538  993183 retry.go:31] will retry after 1.939013647s: waiting for domain to come up
	I0120 12:30:22.007677  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:22.008129  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | unable to find current IP address of domain default-k8s-diff-port-981597 in network mk-default-k8s-diff-port-981597
	I0120 12:30:22.008159  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | I0120 12:30:22.008089  993183 retry.go:31] will retry after 2.896453397s: waiting for domain to come up
	I0120 12:30:24.906745  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:24.907339  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | unable to find current IP address of domain default-k8s-diff-port-981597 in network mk-default-k8s-diff-port-981597
	I0120 12:30:24.907378  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | I0120 12:30:24.907312  993183 retry.go:31] will retry after 3.177046482s: waiting for domain to come up
	I0120 12:30:28.088774  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:28.089358  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) found domain IP: 192.168.39.222
	I0120 12:30:28.089395  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has current primary IP address 192.168.39.222 and MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:28.089405  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) reserving static IP address...
	I0120 12:30:28.089918  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-981597", mac: "52:54:00:a7:4a:e1", ip: "192.168.39.222"} in network mk-default-k8s-diff-port-981597: {Iface:virbr1 ExpiryTime:2025-01-20 13:30:20 +0000 UTC Type:0 Mac:52:54:00:a7:4a:e1 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-981597 Clientid:01:52:54:00:a7:4a:e1}
	I0120 12:30:28.089954  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) reserved static IP address 192.168.39.222 for domain default-k8s-diff-port-981597
	I0120 12:30:28.089973  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | skip adding static IP to network mk-default-k8s-diff-port-981597 - found existing host DHCP lease matching {name: "default-k8s-diff-port-981597", mac: "52:54:00:a7:4a:e1", ip: "192.168.39.222"}
	I0120 12:30:28.089980  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) waiting for SSH...
	I0120 12:30:28.089999  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | Getting to WaitForSSH function...
	I0120 12:30:28.092398  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:28.092710  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4a:e1", ip: ""} in network mk-default-k8s-diff-port-981597: {Iface:virbr1 ExpiryTime:2025-01-20 13:30:20 +0000 UTC Type:0 Mac:52:54:00:a7:4a:e1 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-981597 Clientid:01:52:54:00:a7:4a:e1}
	I0120 12:30:28.092737  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined IP address 192.168.39.222 and MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:28.092835  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | Using SSH client type: external
	I0120 12:30:28.092863  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | Using SSH private key: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/default-k8s-diff-port-981597/id_rsa (-rw-------)
	I0120 12:30:28.092895  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20151-942401/.minikube/machines/default-k8s-diff-port-981597/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 12:30:28.092910  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | About to run SSH command:
	I0120 12:30:28.092925  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | exit 0
	I0120 12:30:28.218298  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | SSH cmd err, output: <nil>: 
	I0120 12:30:28.218628  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetConfigRaw
	I0120 12:30:28.219318  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetIP
	I0120 12:30:28.221898  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:28.222270  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4a:e1", ip: ""} in network mk-default-k8s-diff-port-981597: {Iface:virbr1 ExpiryTime:2025-01-20 13:30:20 +0000 UTC Type:0 Mac:52:54:00:a7:4a:e1 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-981597 Clientid:01:52:54:00:a7:4a:e1}
	I0120 12:30:28.222298  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined IP address 192.168.39.222 and MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:28.222625  993131 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/default-k8s-diff-port-981597/config.json ...
	I0120 12:30:28.222806  993131 machine.go:93] provisionDockerMachine start ...
	I0120 12:30:28.222825  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .DriverName
	I0120 12:30:28.223040  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHHostname
	I0120 12:30:28.225170  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:28.225490  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4a:e1", ip: ""} in network mk-default-k8s-diff-port-981597: {Iface:virbr1 ExpiryTime:2025-01-20 13:30:20 +0000 UTC Type:0 Mac:52:54:00:a7:4a:e1 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-981597 Clientid:01:52:54:00:a7:4a:e1}
	I0120 12:30:28.225511  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined IP address 192.168.39.222 and MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:28.225713  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHPort
	I0120 12:30:28.225870  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHKeyPath
	I0120 12:30:28.226005  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHKeyPath
	I0120 12:30:28.226113  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHUsername
	I0120 12:30:28.226234  993131 main.go:141] libmachine: Using SSH client type: native
	I0120 12:30:28.226488  993131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0120 12:30:28.226503  993131 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 12:30:28.330371  993131 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0120 12:30:28.330405  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetMachineName
	I0120 12:30:28.330636  993131 buildroot.go:166] provisioning hostname "default-k8s-diff-port-981597"
	I0120 12:30:28.330671  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetMachineName
	I0120 12:30:28.330921  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHHostname
	I0120 12:30:28.333667  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:28.334058  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4a:e1", ip: ""} in network mk-default-k8s-diff-port-981597: {Iface:virbr1 ExpiryTime:2025-01-20 13:30:20 +0000 UTC Type:0 Mac:52:54:00:a7:4a:e1 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-981597 Clientid:01:52:54:00:a7:4a:e1}
	I0120 12:30:28.334089  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined IP address 192.168.39.222 and MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:28.334246  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHPort
	I0120 12:30:28.334437  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHKeyPath
	I0120 12:30:28.334649  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHKeyPath
	I0120 12:30:28.334773  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHUsername
	I0120 12:30:28.334937  993131 main.go:141] libmachine: Using SSH client type: native
	I0120 12:30:28.335099  993131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0120 12:30:28.335118  993131 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-981597 && echo "default-k8s-diff-port-981597" | sudo tee /etc/hostname
	I0120 12:30:28.452665  993131 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-981597
	
	I0120 12:30:28.452696  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHHostname
	I0120 12:30:28.455288  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:28.455587  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4a:e1", ip: ""} in network mk-default-k8s-diff-port-981597: {Iface:virbr1 ExpiryTime:2025-01-20 13:30:20 +0000 UTC Type:0 Mac:52:54:00:a7:4a:e1 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-981597 Clientid:01:52:54:00:a7:4a:e1}
	I0120 12:30:28.455616  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined IP address 192.168.39.222 and MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:28.455820  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHPort
	I0120 12:30:28.456008  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHKeyPath
	I0120 12:30:28.456184  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHKeyPath
	I0120 12:30:28.456334  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHUsername
	I0120 12:30:28.456516  993131 main.go:141] libmachine: Using SSH client type: native
	I0120 12:30:28.456686  993131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0120 12:30:28.456703  993131 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-981597' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-981597/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-981597' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 12:30:28.570771  993131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 12:30:28.570800  993131 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20151-942401/.minikube CaCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20151-942401/.minikube}
	I0120 12:30:28.570845  993131 buildroot.go:174] setting up certificates
	I0120 12:30:28.570862  993131 provision.go:84] configureAuth start
	I0120 12:30:28.570876  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetMachineName
	I0120 12:30:28.571154  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetIP
	I0120 12:30:28.573513  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:28.573831  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4a:e1", ip: ""} in network mk-default-k8s-diff-port-981597: {Iface:virbr1 ExpiryTime:2025-01-20 13:30:20 +0000 UTC Type:0 Mac:52:54:00:a7:4a:e1 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-981597 Clientid:01:52:54:00:a7:4a:e1}
	I0120 12:30:28.573861  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined IP address 192.168.39.222 and MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:28.573976  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHHostname
	I0120 12:30:28.576134  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:28.576477  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4a:e1", ip: ""} in network mk-default-k8s-diff-port-981597: {Iface:virbr1 ExpiryTime:2025-01-20 13:30:20 +0000 UTC Type:0 Mac:52:54:00:a7:4a:e1 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-981597 Clientid:01:52:54:00:a7:4a:e1}
	I0120 12:30:28.576518  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined IP address 192.168.39.222 and MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:28.576627  993131 provision.go:143] copyHostCerts
	I0120 12:30:28.576688  993131 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem, removing ...
	I0120 12:30:28.576714  993131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem
	I0120 12:30:28.576794  993131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem (1123 bytes)
	I0120 12:30:28.576900  993131 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem, removing ...
	I0120 12:30:28.576910  993131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem
	I0120 12:30:28.576949  993131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem (1675 bytes)
	I0120 12:30:28.577026  993131 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem, removing ...
	I0120 12:30:28.577036  993131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem
	I0120 12:30:28.577077  993131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem (1078 bytes)
	I0120 12:30:28.577147  993131 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-981597 san=[127.0.0.1 192.168.39.222 default-k8s-diff-port-981597 localhost minikube]
	I0120 12:30:28.666720  993131 provision.go:177] copyRemoteCerts
	I0120 12:30:28.666776  993131 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 12:30:28.666798  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHHostname
	I0120 12:30:28.669453  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:28.669843  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4a:e1", ip: ""} in network mk-default-k8s-diff-port-981597: {Iface:virbr1 ExpiryTime:2025-01-20 13:30:20 +0000 UTC Type:0 Mac:52:54:00:a7:4a:e1 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-981597 Clientid:01:52:54:00:a7:4a:e1}
	I0120 12:30:28.669876  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined IP address 192.168.39.222 and MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:28.670024  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHPort
	I0120 12:30:28.670250  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHKeyPath
	I0120 12:30:28.670435  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHUsername
	I0120 12:30:28.670622  993131 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/default-k8s-diff-port-981597/id_rsa Username:docker}
	I0120 12:30:28.751906  993131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0120 12:30:28.774226  993131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0120 12:30:28.796452  993131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0120 12:30:28.818059  993131 provision.go:87] duration metric: took 247.181484ms to configureAuth
	I0120 12:30:28.818083  993131 buildroot.go:189] setting minikube options for container-runtime
	I0120 12:30:28.818282  993131 config.go:182] Loaded profile config "default-k8s-diff-port-981597": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:30:28.818382  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHHostname
	I0120 12:30:28.820859  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:28.821298  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4a:e1", ip: ""} in network mk-default-k8s-diff-port-981597: {Iface:virbr1 ExpiryTime:2025-01-20 13:30:20 +0000 UTC Type:0 Mac:52:54:00:a7:4a:e1 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-981597 Clientid:01:52:54:00:a7:4a:e1}
	I0120 12:30:28.821329  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined IP address 192.168.39.222 and MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:28.821533  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHPort
	I0120 12:30:28.821723  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHKeyPath
	I0120 12:30:28.821897  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHKeyPath
	I0120 12:30:28.822040  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHUsername
	I0120 12:30:28.822210  993131 main.go:141] libmachine: Using SSH client type: native
	I0120 12:30:28.822371  993131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0120 12:30:28.822388  993131 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 12:30:29.038392  993131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 12:30:29.038416  993131 machine.go:96] duration metric: took 815.596961ms to provisionDockerMachine
	I0120 12:30:29.038428  993131 start.go:293] postStartSetup for "default-k8s-diff-port-981597" (driver="kvm2")
	I0120 12:30:29.038439  993131 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 12:30:29.038456  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .DriverName
	I0120 12:30:29.038789  993131 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 12:30:29.038821  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHHostname
	I0120 12:30:29.041455  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:29.041780  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4a:e1", ip: ""} in network mk-default-k8s-diff-port-981597: {Iface:virbr1 ExpiryTime:2025-01-20 13:30:20 +0000 UTC Type:0 Mac:52:54:00:a7:4a:e1 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-981597 Clientid:01:52:54:00:a7:4a:e1}
	I0120 12:30:29.041816  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined IP address 192.168.39.222 and MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:29.041948  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHPort
	I0120 12:30:29.042178  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHKeyPath
	I0120 12:30:29.042332  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHUsername
	I0120 12:30:29.042468  993131 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/default-k8s-diff-port-981597/id_rsa Username:docker}
	I0120 12:30:29.125407  993131 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 12:30:29.129607  993131 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 12:30:29.129633  993131 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-942401/.minikube/addons for local assets ...
	I0120 12:30:29.129705  993131 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-942401/.minikube/files for local assets ...
	I0120 12:30:29.129802  993131 filesync.go:149] local asset: /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem -> 9496562.pem in /etc/ssl/certs
	I0120 12:30:29.129940  993131 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 12:30:29.140042  993131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem --> /etc/ssl/certs/9496562.pem (1708 bytes)
	I0120 12:30:29.161830  993131 start.go:296] duration metric: took 123.391025ms for postStartSetup
	I0120 12:30:29.161862  993131 fix.go:56] duration metric: took 19.725864494s for fixHost
	I0120 12:30:29.161881  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHHostname
	I0120 12:30:29.164751  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:29.165090  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4a:e1", ip: ""} in network mk-default-k8s-diff-port-981597: {Iface:virbr1 ExpiryTime:2025-01-20 13:30:20 +0000 UTC Type:0 Mac:52:54:00:a7:4a:e1 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-981597 Clientid:01:52:54:00:a7:4a:e1}
	I0120 12:30:29.165118  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined IP address 192.168.39.222 and MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:29.165291  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHPort
	I0120 12:30:29.165491  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHKeyPath
	I0120 12:30:29.165686  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHKeyPath
	I0120 12:30:29.165878  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHUsername
	I0120 12:30:29.166101  993131 main.go:141] libmachine: Using SSH client type: native
	I0120 12:30:29.166307  993131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0120 12:30:29.166322  993131 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 12:30:29.270945  993131 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737376229.234930036
	
	I0120 12:30:29.270979  993131 fix.go:216] guest clock: 1737376229.234930036
	I0120 12:30:29.270989  993131 fix.go:229] Guest: 2025-01-20 12:30:29.234930036 +0000 UTC Remote: 2025-01-20 12:30:29.161865691 +0000 UTC m=+19.880958927 (delta=73.064345ms)
	I0120 12:30:29.271010  993131 fix.go:200] guest clock delta is within tolerance: 73.064345ms
	I0120 12:30:29.271018  993131 start.go:83] releasing machines lock for "default-k8s-diff-port-981597", held for 19.835037311s
	I0120 12:30:29.271044  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .DriverName
	I0120 12:30:29.271298  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetIP
	I0120 12:30:29.273472  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:29.273910  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4a:e1", ip: ""} in network mk-default-k8s-diff-port-981597: {Iface:virbr1 ExpiryTime:2025-01-20 13:30:20 +0000 UTC Type:0 Mac:52:54:00:a7:4a:e1 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-981597 Clientid:01:52:54:00:a7:4a:e1}
	I0120 12:30:29.273940  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined IP address 192.168.39.222 and MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:29.274101  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .DriverName
	I0120 12:30:29.274638  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .DriverName
	I0120 12:30:29.274834  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .DriverName
	I0120 12:30:29.274931  993131 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 12:30:29.274985  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHHostname
	I0120 12:30:29.275080  993131 ssh_runner.go:195] Run: cat /version.json
	I0120 12:30:29.275109  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHHostname
	I0120 12:30:29.277786  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:29.277920  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:29.278150  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4a:e1", ip: ""} in network mk-default-k8s-diff-port-981597: {Iface:virbr1 ExpiryTime:2025-01-20 13:30:20 +0000 UTC Type:0 Mac:52:54:00:a7:4a:e1 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-981597 Clientid:01:52:54:00:a7:4a:e1}
	I0120 12:30:29.278178  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined IP address 192.168.39.222 and MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:29.278243  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4a:e1", ip: ""} in network mk-default-k8s-diff-port-981597: {Iface:virbr1 ExpiryTime:2025-01-20 13:30:20 +0000 UTC Type:0 Mac:52:54:00:a7:4a:e1 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-981597 Clientid:01:52:54:00:a7:4a:e1}
	I0120 12:30:29.278271  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined IP address 192.168.39.222 and MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:29.278301  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHPort
	I0120 12:30:29.278508  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHKeyPath
	I0120 12:30:29.278541  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHPort
	I0120 12:30:29.278717  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHKeyPath
	I0120 12:30:29.278724  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHUsername
	I0120 12:30:29.278878  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHUsername
	I0120 12:30:29.278882  993131 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/default-k8s-diff-port-981597/id_rsa Username:docker}
	I0120 12:30:29.278990  993131 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/default-k8s-diff-port-981597/id_rsa Username:docker}
	I0120 12:30:29.384456  993131 ssh_runner.go:195] Run: systemctl --version
	I0120 12:30:29.390036  993131 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 12:30:29.532221  993131 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 12:30:29.538117  993131 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 12:30:29.538192  993131 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 12:30:29.553689  993131 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 12:30:29.553719  993131 start.go:495] detecting cgroup driver to use...
	I0120 12:30:29.553806  993131 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 12:30:29.569134  993131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 12:30:29.582276  993131 docker.go:217] disabling cri-docker service (if available) ...
	I0120 12:30:29.582332  993131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 12:30:29.595519  993131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 12:30:29.608615  993131 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 12:30:29.723957  993131 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 12:30:29.866700  993131 docker.go:233] disabling docker service ...
	I0120 12:30:29.866776  993131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 12:30:29.881166  993131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 12:30:29.893035  993131 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 12:30:30.008827  993131 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 12:30:30.125357  993131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 12:30:30.138908  993131 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 12:30:30.155584  993131 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0120 12:30:30.155653  993131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:30:30.165294  993131 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 12:30:30.165375  993131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:30:30.175218  993131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:30:30.184489  993131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:30:30.193769  993131 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 12:30:30.203394  993131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:30:30.212941  993131 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:30:30.227865  993131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:30:30.237337  993131 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 12:30:30.245828  993131 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 12:30:30.245885  993131 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 12:30:30.258140  993131 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 12:30:30.267109  993131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:30:30.373355  993131 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 12:30:30.457724  993131 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 12:30:30.457799  993131 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 12:30:30.462268  993131 start.go:563] Will wait 60s for crictl version
	I0120 12:30:30.462334  993131 ssh_runner.go:195] Run: which crictl
	I0120 12:30:30.466807  993131 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 12:30:30.503055  993131 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 12:30:30.503138  993131 ssh_runner.go:195] Run: crio --version
	I0120 12:30:30.529393  993131 ssh_runner.go:195] Run: crio --version
	I0120 12:30:30.556720  993131 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0120 12:30:30.558050  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetIP
	I0120 12:30:30.560899  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:30.561362  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4a:e1", ip: ""} in network mk-default-k8s-diff-port-981597: {Iface:virbr1 ExpiryTime:2025-01-20 13:30:20 +0000 UTC Type:0 Mac:52:54:00:a7:4a:e1 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-981597 Clientid:01:52:54:00:a7:4a:e1}
	I0120 12:30:30.561387  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined IP address 192.168.39.222 and MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:30:30.561649  993131 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0120 12:30:30.565381  993131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 12:30:30.576629  993131 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-981597 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-981597 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.222 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 12:30:30.576765  993131 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 12:30:30.576820  993131 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:30:30.608706  993131 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0120 12:30:30.608770  993131 ssh_runner.go:195] Run: which lz4
	I0120 12:30:30.612777  993131 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 12:30:30.616679  993131 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 12:30:30.616711  993131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I0120 12:30:31.911049  993131 crio.go:462] duration metric: took 1.298301363s to copy over tarball
	I0120 12:30:31.911121  993131 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 12:30:34.025071  993131 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.113914987s)
	I0120 12:30:34.025113  993131 crio.go:469] duration metric: took 2.114022744s to extract the tarball
	I0120 12:30:34.025125  993131 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0120 12:30:34.064810  993131 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:30:34.109971  993131 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 12:30:34.109996  993131 cache_images.go:84] Images are preloaded, skipping loading
	I0120 12:30:34.110006  993131 kubeadm.go:934] updating node { 192.168.39.222 8444 v1.32.0 crio true true} ...
	I0120 12:30:34.110139  993131 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-981597 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-981597 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 12:30:34.110208  993131 ssh_runner.go:195] Run: crio config
	I0120 12:30:34.165828  993131 cni.go:84] Creating CNI manager for ""
	I0120 12:30:34.165851  993131 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:30:34.165863  993131 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 12:30:34.165891  993131 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.222 APIServerPort:8444 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-981597 NodeName:default-k8s-diff-port-981597 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.222"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.222 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 12:30:34.166059  993131 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.222
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-981597"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.222"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.222"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 12:30:34.166156  993131 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 12:30:34.177613  993131 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 12:30:34.177697  993131 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 12:30:34.187025  993131 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0120 12:30:34.204784  993131 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 12:30:34.220316  993131 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I0120 12:30:34.235751  993131 ssh_runner.go:195] Run: grep 192.168.39.222	control-plane.minikube.internal$ /etc/hosts
	I0120 12:30:34.239104  993131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.222	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 12:30:34.250067  993131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:30:34.403523  993131 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:30:34.421888  993131 certs.go:68] Setting up /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/default-k8s-diff-port-981597 for IP: 192.168.39.222
	I0120 12:30:34.421913  993131 certs.go:194] generating shared ca certs ...
	I0120 12:30:34.421937  993131 certs.go:226] acquiring lock for ca certs: {Name:mk7d6f61e0c358ebc451db1b73619131ab80bd63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:30:34.422116  993131 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20151-942401/.minikube/ca.key
	I0120 12:30:34.422243  993131 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.key
	I0120 12:30:34.422266  993131 certs.go:256] generating profile certs ...
	I0120 12:30:34.422378  993131 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/default-k8s-diff-port-981597/client.key
	I0120 12:30:34.422465  993131 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/default-k8s-diff-port-981597/apiserver.key.a6094c5e
	I0120 12:30:34.422553  993131 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/default-k8s-diff-port-981597/proxy-client.key
	I0120 12:30:34.422718  993131 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656.pem (1338 bytes)
	W0120 12:30:34.422765  993131 certs.go:480] ignoring /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656_empty.pem, impossibly tiny 0 bytes
	I0120 12:30:34.422783  993131 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem (1679 bytes)
	I0120 12:30:34.422828  993131 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem (1078 bytes)
	I0120 12:30:34.422867  993131 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem (1123 bytes)
	I0120 12:30:34.422906  993131 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem (1675 bytes)
	I0120 12:30:34.423001  993131 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem (1708 bytes)
	I0120 12:30:34.423731  993131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 12:30:34.466500  993131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0120 12:30:34.500366  993131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 12:30:34.526305  993131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 12:30:34.553245  993131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/default-k8s-diff-port-981597/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0120 12:30:34.580834  993131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/default-k8s-diff-port-981597/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0120 12:30:34.603597  993131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/default-k8s-diff-port-981597/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 12:30:34.624874  993131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/default-k8s-diff-port-981597/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 12:30:34.646597  993131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656.pem --> /usr/share/ca-certificates/949656.pem (1338 bytes)
	I0120 12:30:34.667978  993131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem --> /usr/share/ca-certificates/9496562.pem (1708 bytes)
	I0120 12:30:34.688520  993131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 12:30:34.709442  993131 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 12:30:34.724508  993131 ssh_runner.go:195] Run: openssl version
	I0120 12:30:34.729738  993131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/949656.pem && ln -fs /usr/share/ca-certificates/949656.pem /etc/ssl/certs/949656.pem"
	I0120 12:30:34.739759  993131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/949656.pem
	I0120 12:30:34.743656  993131 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 11:30 /usr/share/ca-certificates/949656.pem
	I0120 12:30:34.743710  993131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/949656.pem
	I0120 12:30:34.748943  993131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/949656.pem /etc/ssl/certs/51391683.0"
	I0120 12:30:34.758621  993131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9496562.pem && ln -fs /usr/share/ca-certificates/9496562.pem /etc/ssl/certs/9496562.pem"
	I0120 12:30:34.768392  993131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9496562.pem
	I0120 12:30:34.772609  993131 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 11:30 /usr/share/ca-certificates/9496562.pem
	I0120 12:30:34.772670  993131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9496562.pem
	I0120 12:30:34.777876  993131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9496562.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 12:30:34.788336  993131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 12:30:34.798053  993131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:30:34.801887  993131 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 11:22 /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:30:34.801925  993131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:30:34.806972  993131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 12:30:34.816642  993131 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 12:30:34.820659  993131 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 12:30:34.825848  993131 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 12:30:34.831217  993131 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 12:30:34.836355  993131 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 12:30:34.841407  993131 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 12:30:34.846555  993131 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0120 12:30:34.852018  993131 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-981597 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-981597 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.222 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:30:34.852104  993131 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 12:30:34.852140  993131 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 12:30:34.888179  993131 cri.go:89] found id: ""
	I0120 12:30:34.888235  993131 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 12:30:34.897396  993131 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0120 12:30:34.897422  993131 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0120 12:30:34.897463  993131 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0120 12:30:34.906277  993131 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0120 12:30:34.907182  993131 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-981597" does not appear in /home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:30:34.907465  993131 kubeconfig.go:62] /home/jenkins/minikube-integration/20151-942401/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-981597" cluster setting kubeconfig missing "default-k8s-diff-port-981597" context setting]
	I0120 12:30:34.907957  993131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/kubeconfig: {Name:mk7b2d17f701fc845d991764ab9cc32b8b0646e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:30:34.909213  993131 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0120 12:30:34.918939  993131 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.222
	I0120 12:30:34.918974  993131 kubeadm.go:1160] stopping kube-system containers ...
	I0120 12:30:34.918988  993131 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0120 12:30:34.919035  993131 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 12:30:34.955741  993131 cri.go:89] found id: ""
	I0120 12:30:34.955811  993131 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0120 12:30:34.974929  993131 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:30:34.985503  993131 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:30:34.985521  993131 kubeadm.go:157] found existing configuration files:
	
	I0120 12:30:34.985561  993131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0120 12:30:34.995137  993131 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:30:34.995184  993131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:30:35.005079  993131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0120 12:30:35.013763  993131 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:30:35.013819  993131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:30:35.022102  993131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0120 12:30:35.030551  993131 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:30:35.030595  993131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:30:35.039696  993131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0120 12:30:35.047653  993131 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:30:35.047700  993131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
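The grep-then-rm sequence above is a stale-config check: each kubeconfig under /etc/kubernetes must reference the expected endpoint https://control-plane.minikube.internal:8444, and any file that does not (or does not exist) is removed so kubeadm regenerates it. A condensed sketch of that loop (the endpoint and file names come from the log; the loop form itself is illustrative):

    endpoint="https://control-plane.minikube.internal:8444"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # drop the kubeconfig if it is missing or points at a different endpoint
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done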
	I0120 12:30:35.056417  993131 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 12:30:35.064911  993131 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:30:35.175834  993131 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:30:36.343917  993131 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.168036319s)
	I0120 12:30:36.343960  993131 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:30:36.549943  993131 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:30:36.610200  993131 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
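Rather than a full kubeadm init, the restart path above replays the individual init phases against the generated config. Reproducing the same sequence by hand looks like this (binary path, version, and config path as shown in the log):

    export PATH="/var/lib/minikube/binaries/v1.32.0:$PATH"
    sudo env PATH="$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml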
	I0120 12:30:36.718644  993131 api_server.go:52] waiting for apiserver process to appear ...
	I0120 12:30:36.718744  993131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:30:37.219066  993131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:30:37.719198  993131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:30:38.218862  993131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:30:38.718966  993131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:30:38.752531  993131 api_server.go:72] duration metric: took 2.033887812s to wait for apiserver process to appear ...
	I0120 12:30:38.752568  993131 api_server.go:88] waiting for apiserver healthz status ...
	I0120 12:30:38.752598  993131 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0120 12:30:38.753146  993131 api_server.go:269] stopped: https://192.168.39.222:8444/healthz: Get "https://192.168.39.222:8444/healthz": dial tcp 192.168.39.222:8444: connect: connection refused
	I0120 12:30:39.252818  993131 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0120 12:30:41.524454  993131 api_server.go:279] https://192.168.39.222:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0120 12:30:41.524486  993131 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0120 12:30:41.524502  993131 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0120 12:30:41.539605  993131 api_server.go:279] https://192.168.39.222:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0120 12:30:41.539631  993131 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0120 12:30:41.752970  993131 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0120 12:30:41.758810  993131 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 12:30:41.758837  993131 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 12:30:42.253522  993131 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0120 12:30:42.258628  993131 api_server.go:279] https://192.168.39.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 12:30:42.258652  993131 api_server.go:103] status: https://192.168.39.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 12:30:42.753398  993131 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0120 12:30:42.775351  993131 api_server.go:279] https://192.168.39.222:8444/healthz returned 200:
	ok
	I0120 12:30:42.784531  993131 api_server.go:141] control plane version: v1.32.0
	I0120 12:30:42.784568  993131 api_server.go:131] duration metric: took 4.031989825s to wait for apiserver health ...
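The progression above (connection refused, then 403, then 500, then 200) is the normal apiserver startup sequence: the unauthenticated probe runs as system:anonymous, which is forbidden until the rbac/bootstrap-roles post-start hook completes, and the 500 responses list exactly the hooks still pending. The same endpoint can be probed by hand; a sketch using the apiserver address from the log:

    # unauthenticated probe; 403 is expected until RBAC bootstrap grants anonymous access to /healthz
    curl -k https://192.168.39.222:8444/healthz
    # per-check breakdown, useful while the aggregate endpoint still returns 500
    curl -k "https://192.168.39.222:8444/healthz?verbose"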
	I0120 12:30:42.784582  993131 cni.go:84] Creating CNI manager for ""
	I0120 12:30:42.784592  993131 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:30:42.786079  993131 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 12:30:42.787267  993131 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 12:30:42.813961  993131 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
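The 496-byte file copied above is the bridge CNI configuration minikube generates; its exact contents are not shown in the log. For orientation only, a typical bridge plus host-local conflist has roughly this shape (every field value below is illustrative, not minikube's actual file):

    cat > /tmp/bridge-example.conflist <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF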
	I0120 12:30:42.839801  993131 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 12:30:42.852265  993131 system_pods.go:59] 8 kube-system pods found
	I0120 12:30:42.852310  993131 system_pods.go:61] "coredns-668d6bf9bc-25cqm" [776cde94-8e5e-423b-b90d-b096adf05510] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0120 12:30:42.852323  993131 system_pods.go:61] "etcd-default-k8s-diff-port-981597" [523d2c17-ed01-4c32-b043-2752867c33d2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0120 12:30:42.852335  993131 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-981597" [ede22888-b193-40c1-a602-5352a3967417] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0120 12:30:42.852347  993131 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-981597" [084797a9-8f40-4c6e-9329-ebc0596857f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0120 12:30:42.852358  993131 system_pods.go:61] "kube-proxy-gb8w6" [32e839d4-8869-492c-9ef8-5c8231cd513c] Running
	I0120 12:30:42.852366  993131 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-981597" [bac46167-85ee-4af0-bd1b-cc67e92a6b06] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0120 12:30:42.852374  993131 system_pods.go:61] "metrics-server-f79f97bbb-hb6dm" [9bef6f6b-cb11-4fa5-a29c-061470736899] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 12:30:42.852385  993131 system_pods.go:61] "storage-provisioner" [404dc16e-7c96-4a2d-93ca-6cc7f4e8ff01] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0120 12:30:42.852397  993131 system_pods.go:74] duration metric: took 12.574634ms to wait for pod list to return data ...
	I0120 12:30:42.852411  993131 node_conditions.go:102] verifying NodePressure condition ...
	I0120 12:30:42.859979  993131 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 12:30:42.860014  993131 node_conditions.go:123] node cpu capacity is 2
	I0120 12:30:42.860029  993131 node_conditions.go:105] duration metric: took 7.608926ms to run NodePressure ...
	I0120 12:30:42.860051  993131 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:30:43.134198  993131 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0120 12:30:43.138644  993131 kubeadm.go:739] kubelet initialised
	I0120 12:30:43.138671  993131 kubeadm.go:740] duration metric: took 4.446086ms waiting for restarted kubelet to initialise ...
	I0120 12:30:43.138683  993131 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:30:43.143815  993131 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-25cqm" in "kube-system" namespace to be "Ready" ...
	I0120 12:30:45.148980  993131 pod_ready.go:103] pod "coredns-668d6bf9bc-25cqm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:47.149951  993131 pod_ready.go:103] pod "coredns-668d6bf9bc-25cqm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:49.150276  993131 pod_ready.go:103] pod "coredns-668d6bf9bc-25cqm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:51.651896  993131 pod_ready.go:103] pod "coredns-668d6bf9bc-25cqm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:54.150684  993131 pod_ready.go:103] pod "coredns-668d6bf9bc-25cqm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:55.650800  993131 pod_ready.go:93] pod "coredns-668d6bf9bc-25cqm" in "kube-system" namespace has status "Ready":"True"
	I0120 12:30:55.650825  993131 pod_ready.go:82] duration metric: took 12.506985582s for pod "coredns-668d6bf9bc-25cqm" in "kube-system" namespace to be "Ready" ...
	I0120 12:30:55.650835  993131 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:30:55.654950  993131 pod_ready.go:93] pod "etcd-default-k8s-diff-port-981597" in "kube-system" namespace has status "Ready":"True"
	I0120 12:30:55.654967  993131 pod_ready.go:82] duration metric: took 4.123668ms for pod "etcd-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:30:55.654976  993131 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:30:57.661160  993131 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-981597" in "kube-system" namespace has status "Ready":"True"
	I0120 12:30:57.661182  993131 pod_ready.go:82] duration metric: took 2.006199388s for pod "kube-apiserver-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:30:57.661193  993131 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:30:57.665497  993131 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-981597" in "kube-system" namespace has status "Ready":"True"
	I0120 12:30:57.665515  993131 pod_ready.go:82] duration metric: took 4.316169ms for pod "kube-controller-manager-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:30:57.665524  993131 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-gb8w6" in "kube-system" namespace to be "Ready" ...
	I0120 12:30:57.673829  993131 pod_ready.go:93] pod "kube-proxy-gb8w6" in "kube-system" namespace has status "Ready":"True"
	I0120 12:30:57.673845  993131 pod_ready.go:82] duration metric: took 8.315858ms for pod "kube-proxy-gb8w6" in "kube-system" namespace to be "Ready" ...
	I0120 12:30:57.673853  993131 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:30:57.678766  993131 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-981597" in "kube-system" namespace has status "Ready":"True"
	I0120 12:30:57.678785  993131 pod_ready.go:82] duration metric: took 4.926306ms for pod "kube-scheduler-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:30:57.678794  993131 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace to be "Ready" ...
	I0120 12:30:59.685686  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:02.184471  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:04.185146  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:06.185902  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:08.186440  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:10.684376  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:12.684889  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:14.686169  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:17.184821  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:19.684947  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:21.686415  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:24.185262  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:26.685300  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:29.186578  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:31.684648  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:33.685881  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:35.687588  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:38.185847  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:40.817405  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:43.185212  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:45.684419  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:48.185535  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:50.684323  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:52.684538  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:54.685013  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:57.186057  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:59.684870  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:01.685889  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:04.185105  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:06.185872  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:08.683979  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:10.685405  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:13.184959  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:15.685252  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:17.685468  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:20.184125  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:22.184670  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:24.184995  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:26.684732  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:29.185287  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:31.685583  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:34.184568  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:36.185027  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:38.684930  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:40.686049  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:43.188216  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:45.685384  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:48.184725  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:50.685157  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:53.185189  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:55.684905  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:57.686081  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:00.184931  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:02.684980  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:04.685422  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:07.184287  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:09.185215  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:11.685128  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:13.686705  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:16.184659  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:18.185053  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:20.185265  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:22.684404  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:24.685216  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:26.687261  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:29.183496  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:31.184534  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:33.184696  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:35.684708  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:37.685419  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:40.184106  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:42.184765  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:44.686233  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:47.185408  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:49.186465  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:51.190620  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:53.685783  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:55.686287  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:58.185263  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:00.685449  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:02.688229  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:05.185480  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:07.185630  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:09.185867  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:11.684329  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:13.686198  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:16.185205  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:18.685055  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:21.184740  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:23.685218  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:25.685681  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:28.183977  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:30.184978  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:32.185137  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:34.685113  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:36.685852  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:39.185481  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:41.685860  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:43.685895  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:46.185138  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:48.185173  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:50.685136  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:53.185766  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:55.685957  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:57.679747  993131 pod_ready.go:82] duration metric: took 4m0.000931966s for pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace to be "Ready" ...
	E0120 12:34:57.679804  993131 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0120 12:34:57.679835  993131 pod_ready.go:39] duration metric: took 4m14.541139208s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:34:57.679882  993131 kubeadm.go:597] duration metric: took 4m22.782450691s to restartPrimaryControlPlane
	W0120 12:34:57.679976  993131 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
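The extra wait above gives each system-critical pod up to 4 minutes, and metrics-server-f79f97bbb-hb6dm never reports Ready, so minikube abandons the in-place restart and falls back to a full reset. Note that the StartCluster config earlier in this log overrides the metrics-server registry to fake.domain, so the image pull is not expected to succeed in this test. When debugging a case like this by hand, the usual next step is to inspect the pod and its events (context and pod name taken from the log):

    kubectl --context default-k8s-diff-port-981597 -n kube-system describe pod metrics-server-f79f97bbb-hb6dm
    kubectl --context default-k8s-diff-port-981597 -n kube-system get events \
      --field-selector involvedObject.name=metrics-server-f79f97bbb-hb6dm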
	I0120 12:34:57.680017  993131 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 12:35:25.897831  993131 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (28.217725548s)
	I0120 12:35:25.897928  993131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 12:35:25.911960  993131 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 12:35:25.920888  993131 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:35:25.929485  993131 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:35:25.929507  993131 kubeadm.go:157] found existing configuration files:
	
	I0120 12:35:25.929555  993131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0120 12:35:25.937714  993131 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:35:25.937770  993131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:35:25.946009  993131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0120 12:35:25.954472  993131 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:35:25.954515  993131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:35:25.962622  993131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0120 12:35:25.970420  993131 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:35:25.970466  993131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:35:25.978489  993131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0120 12:35:25.986579  993131 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:35:25.986631  993131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 12:35:25.994935  993131 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 12:35:26.145798  993131 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 12:35:34.909127  993131 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 12:35:34.909216  993131 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 12:35:34.909344  993131 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 12:35:34.909477  993131 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 12:35:34.909620  993131 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 12:35:34.909715  993131 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 12:35:34.911105  993131 out.go:235]   - Generating certificates and keys ...
	I0120 12:35:34.911202  993131 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 12:35:34.911293  993131 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 12:35:34.911398  993131 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 12:35:34.911468  993131 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 12:35:34.911533  993131 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 12:35:34.911590  993131 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 12:35:34.911674  993131 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 12:35:34.911735  993131 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 12:35:34.911828  993131 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 12:35:34.911943  993131 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 12:35:34.912009  993131 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 12:35:34.912100  993131 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 12:35:34.912190  993131 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 12:35:34.912286  993131 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 12:35:34.912332  993131 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 12:35:34.912438  993131 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 12:35:34.912528  993131 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 12:35:34.912635  993131 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 12:35:34.912726  993131 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 12:35:34.914123  993131 out.go:235]   - Booting up control plane ...
	I0120 12:35:34.914234  993131 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 12:35:34.914348  993131 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 12:35:34.914449  993131 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 12:35:34.914608  993131 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 12:35:34.914688  993131 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 12:35:34.914725  993131 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 12:35:34.914857  993131 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 12:35:34.914944  993131 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 12:35:34.915002  993131 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.58459ms
	I0120 12:35:34.915062  993131 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 12:35:34.915123  993131 kubeadm.go:310] [api-check] The API server is healthy after 5.503412907s
	I0120 12:35:34.915262  993131 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 12:35:34.915400  993131 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 12:35:34.915458  993131 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 12:35:34.915633  993131 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-981597 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 12:35:34.915681  993131 kubeadm.go:310] [bootstrap-token] Using token: i0tzs5.z567f1ntzr02cqfq
	I0120 12:35:34.916955  993131 out.go:235]   - Configuring RBAC rules ...
	I0120 12:35:34.917087  993131 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 12:35:34.917200  993131 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 12:35:34.917374  993131 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 12:35:34.917519  993131 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 12:35:34.917673  993131 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 12:35:34.917794  993131 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 12:35:34.917950  993131 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 12:35:34.918013  993131 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 12:35:34.918074  993131 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 12:35:34.918083  993131 kubeadm.go:310] 
	I0120 12:35:34.918237  993131 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 12:35:34.918260  993131 kubeadm.go:310] 
	I0120 12:35:34.918376  993131 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 12:35:34.918388  993131 kubeadm.go:310] 
	I0120 12:35:34.918425  993131 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 12:35:34.918506  993131 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 12:35:34.918601  993131 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 12:35:34.918613  993131 kubeadm.go:310] 
	I0120 12:35:34.918694  993131 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 12:35:34.918704  993131 kubeadm.go:310] 
	I0120 12:35:34.918758  993131 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 12:35:34.918770  993131 kubeadm.go:310] 
	I0120 12:35:34.918843  993131 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 12:35:34.918947  993131 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 12:35:34.919045  993131 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 12:35:34.919057  993131 kubeadm.go:310] 
	I0120 12:35:34.919174  993131 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 12:35:34.919281  993131 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 12:35:34.919295  993131 kubeadm.go:310] 
	I0120 12:35:34.919404  993131 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token i0tzs5.z567f1ntzr02cqfq \
	I0120 12:35:34.919548  993131 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9daca4b474befed26d429fab1ed19f40e58ea9925316ea59a2e801171f5c9665 \
	I0120 12:35:34.919582  993131 kubeadm.go:310] 	--control-plane 
	I0120 12:35:34.919594  993131 kubeadm.go:310] 
	I0120 12:35:34.919711  993131 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 12:35:34.919723  993131 kubeadm.go:310] 
	I0120 12:35:34.919827  993131 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token i0tzs5.z567f1ntzr02cqfq \
	I0120 12:35:34.919982  993131 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9daca4b474befed26d429fab1ed19f40e58ea9925316ea59a2e801171f5c9665 
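The --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA's public key, which joining nodes use to pin the control plane's identity. It can be recomputed from the CA certificate with the standard kubeadm recipe; a sketch using the certificateDir reported earlier in this log:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'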
	I0120 12:35:34.919999  993131 cni.go:84] Creating CNI manager for ""
	I0120 12:35:34.920015  993131 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:35:34.921475  993131 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 12:35:34.922590  993131 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 12:35:34.933756  993131 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 12:35:34.952622  993131 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 12:35:34.952700  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:34.952763  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-981597 minikube.k8s.io/updated_at=2025_01_20T12_35_34_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=77d80cf1517f5f1439721b28711982314b21bec9 minikube.k8s.io/name=default-k8s-diff-port-981597 minikube.k8s.io/primary=true
	I0120 12:35:35.145316  993131 ops.go:34] apiserver oom_adj: -16
	I0120 12:35:35.161459  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:35.662404  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:36.162367  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:36.662373  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:37.162163  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:37.661727  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:38.161998  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:38.662452  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:39.161911  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:39.336211  993131 kubeadm.go:1113] duration metric: took 4.383561407s to wait for elevateKubeSystemPrivileges
	I0120 12:35:39.336266  993131 kubeadm.go:394] duration metric: took 5m4.484253589s to StartCluster
	I0120 12:35:39.336293  993131 settings.go:142] acquiring lock: {Name:mk1751cb86b61fcfcb0ff093c93fb653ca2002ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:35:39.336426  993131 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:35:39.338834  993131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/kubeconfig: {Name:mk7b2d17f701fc845d991764ab9cc32b8b0646e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:35:39.339088  993131 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.222 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 12:35:39.339220  993131 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 12:35:39.339332  993131 config.go:182] Loaded profile config "default-k8s-diff-port-981597": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:35:39.339365  993131 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-981597"
	I0120 12:35:39.339391  993131 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-981597"
	I0120 12:35:39.339390  993131 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-981597"
	W0120 12:35:39.339401  993131 addons.go:247] addon storage-provisioner should already be in state true
	I0120 12:35:39.339408  993131 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-981597"
	W0120 12:35:39.339418  993131 addons.go:247] addon dashboard should already be in state true
	I0120 12:35:39.339411  993131 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-981597"
	I0120 12:35:39.339435  993131 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-981597"
	W0120 12:35:39.339444  993131 addons.go:247] addon metrics-server should already be in state true
	I0120 12:35:39.339444  993131 host.go:66] Checking if "default-k8s-diff-port-981597" exists ...
	I0120 12:35:39.339451  993131 host.go:66] Checking if "default-k8s-diff-port-981597" exists ...
	I0120 12:35:39.339474  993131 host.go:66] Checking if "default-k8s-diff-port-981597" exists ...
	I0120 12:35:39.339390  993131 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-981597"
	I0120 12:35:39.339493  993131 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-981597"
	I0120 12:35:39.339824  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:35:39.339865  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:39.339892  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:35:39.339923  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:39.339892  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:35:39.340012  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:39.340084  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:35:39.340125  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:39.343052  993131 out.go:177] * Verifying Kubernetes components...
	I0120 12:35:39.344268  993131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:35:39.360766  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39599
	I0120 12:35:39.360936  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34553
	I0120 12:35:39.361027  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33557
	I0120 12:35:39.361484  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:39.361615  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:39.361686  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:39.361937  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:35:39.361959  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:39.362058  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:35:39.362066  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:39.362167  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:35:39.362178  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:39.362512  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:39.362592  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:39.362613  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:39.362835  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetState
	I0120 12:35:39.363083  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:35:39.363147  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:39.363178  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33345
	I0120 12:35:39.363870  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:39.364373  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:35:39.364508  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:39.364871  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:35:39.364893  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:39.365250  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:39.365757  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:35:39.365799  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:39.366758  993131 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-981597"
	W0120 12:35:39.366781  993131 addons.go:247] addon default-storageclass should already be in state true
	I0120 12:35:39.366816  993131 host.go:66] Checking if "default-k8s-diff-port-981597" exists ...
	I0120 12:35:39.367172  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:35:39.367210  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:39.385700  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43395
	I0120 12:35:39.386220  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:39.386752  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:35:39.386776  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:39.387167  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:39.387430  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetState
	I0120 12:35:39.388835  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42259
	I0120 12:35:39.389074  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42931
	I0120 12:35:39.389290  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:39.389718  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:39.389796  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:35:39.389819  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:39.390265  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:35:39.390287  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:39.390316  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .DriverName
	I0120 12:35:39.390346  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:39.390828  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:39.391044  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetState
	I0120 12:35:39.391081  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetState
	I0120 12:35:39.392517  993131 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0120 12:35:39.392556  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42719
	I0120 12:35:39.393043  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:39.393711  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:35:39.393715  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .DriverName
	I0120 12:35:39.393730  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:39.394195  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:39.394747  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:35:39.394793  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:39.395249  993131 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:35:39.395355  993131 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0120 12:35:39.395403  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .DriverName
	I0120 12:35:39.396870  993131 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:35:39.396892  993131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 12:35:39.396914  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHHostname
	I0120 12:35:39.396998  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0120 12:35:39.397017  993131 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0120 12:35:39.397039  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHHostname
	I0120 12:35:39.399496  993131 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0120 12:35:39.400927  993131 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 12:35:39.400947  993131 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 12:35:39.400969  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHHostname
	I0120 12:35:39.401577  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHPort
	I0120 12:35:39.401584  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:35:39.401591  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:35:39.401608  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4a:e1", ip: ""} in network mk-default-k8s-diff-port-981597: {Iface:virbr1 ExpiryTime:2025-01-20 13:30:20 +0000 UTC Type:0 Mac:52:54:00:a7:4a:e1 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-981597 Clientid:01:52:54:00:a7:4a:e1}
	I0120 12:35:39.401620  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4a:e1", ip: ""} in network mk-default-k8s-diff-port-981597: {Iface:virbr1 ExpiryTime:2025-01-20 13:30:20 +0000 UTC Type:0 Mac:52:54:00:a7:4a:e1 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-981597 Clientid:01:52:54:00:a7:4a:e1}
	I0120 12:35:39.401625  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined IP address 192.168.39.222 and MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:35:39.401641  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined IP address 192.168.39.222 and MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:35:39.401644  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHPort
	I0120 12:35:39.401851  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHKeyPath
	I0120 12:35:39.401948  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHKeyPath
	I0120 12:35:39.402022  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHUsername
	I0120 12:35:39.402053  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHUsername
	I0120 12:35:39.402154  993131 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/default-k8s-diff-port-981597/id_rsa Username:docker}
	I0120 12:35:39.402468  993131 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/default-k8s-diff-port-981597/id_rsa Username:docker}
	I0120 12:35:39.404077  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:35:39.406625  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHPort
	I0120 12:35:39.406703  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4a:e1", ip: ""} in network mk-default-k8s-diff-port-981597: {Iface:virbr1 ExpiryTime:2025-01-20 13:30:20 +0000 UTC Type:0 Mac:52:54:00:a7:4a:e1 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-981597 Clientid:01:52:54:00:a7:4a:e1}
	I0120 12:35:39.406720  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined IP address 192.168.39.222 and MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:35:39.410708  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHKeyPath
	I0120 12:35:39.410899  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHUsername
	I0120 12:35:39.411057  993131 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/default-k8s-diff-port-981597/id_rsa Username:docker}
	I0120 12:35:39.414646  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46217
	I0120 12:35:39.415080  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:39.415539  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:35:39.415560  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:39.415922  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:39.416132  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetState
	I0120 12:35:39.417677  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .DriverName
	I0120 12:35:39.417895  993131 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 12:35:39.417909  993131 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 12:35:39.417927  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHHostname
	I0120 12:35:39.422636  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:35:39.422665  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4a:e1", ip: ""} in network mk-default-k8s-diff-port-981597: {Iface:virbr1 ExpiryTime:2025-01-20 13:30:20 +0000 UTC Type:0 Mac:52:54:00:a7:4a:e1 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-981597 Clientid:01:52:54:00:a7:4a:e1}
	I0120 12:35:39.422682  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined IP address 192.168.39.222 and MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:35:39.422694  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHPort
	I0120 12:35:39.424675  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHKeyPath
	I0120 12:35:39.424843  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHUsername
	I0120 12:35:39.424988  993131 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/default-k8s-diff-port-981597/id_rsa Username:docker}
	I0120 12:35:39.601008  993131 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:35:39.644654  993131 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-981597" to be "Ready" ...
	I0120 12:35:39.675702  993131 node_ready.go:49] node "default-k8s-diff-port-981597" has status "Ready":"True"
	I0120 12:35:39.675723  993131 node_ready.go:38] duration metric: took 31.032591ms for node "default-k8s-diff-port-981597" to be "Ready" ...
	I0120 12:35:39.675734  993131 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:35:39.685490  993131 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-cn8tc" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:39.768195  993131 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 12:35:39.768218  993131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0120 12:35:39.812873  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0120 12:35:39.812897  993131 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0120 12:35:39.822881  993131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 12:35:39.825928  993131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:35:39.846613  993131 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 12:35:39.846645  993131 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 12:35:39.883996  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0120 12:35:39.884037  993131 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0120 12:35:39.935435  993131 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 12:35:39.935470  993131 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 12:35:39.992813  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0120 12:35:39.992840  993131 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0120 12:35:40.026214  993131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 12:35:40.069154  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0120 12:35:40.069190  993131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0120 12:35:40.121948  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0120 12:35:40.121983  993131 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0120 12:35:40.243520  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0120 12:35:40.243553  993131 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0120 12:35:40.252481  993131 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:40.252512  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .Close
	I0120 12:35:40.252849  993131 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:40.252872  993131 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:40.252885  993131 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:40.252900  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .Close
	I0120 12:35:40.253335  993131 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:40.253397  993131 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:40.253372  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | Closing plugin on server side
	I0120 12:35:40.257887  993131 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:40.257903  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .Close
	I0120 12:35:40.258196  993131 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:40.258214  993131 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:40.295226  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0120 12:35:40.295255  993131 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0120 12:35:40.386270  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0120 12:35:40.386304  993131 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0120 12:35:40.478877  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 12:35:40.478909  993131 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0120 12:35:40.533601  993131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 12:35:40.863384  993131 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.037420526s)
	I0120 12:35:40.863438  993131 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:40.863447  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .Close
	I0120 12:35:40.863790  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | Closing plugin on server side
	I0120 12:35:40.863831  993131 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:40.863841  993131 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:40.863851  993131 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:40.863864  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .Close
	I0120 12:35:40.864124  993131 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:40.864145  993131 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:40.864150  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | Closing plugin on server side
	I0120 12:35:41.207665  993131 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.181404643s)
	I0120 12:35:41.207727  993131 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:41.207743  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .Close
	I0120 12:35:41.208079  993131 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:41.208098  993131 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:41.208117  993131 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:41.208126  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .Close
	I0120 12:35:41.208422  993131 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:41.208445  993131 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:41.208445  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | Closing plugin on server side
	I0120 12:35:41.208456  993131 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-981597"
	I0120 12:35:41.719786  993131 pod_ready.go:93] pod "coredns-668d6bf9bc-cn8tc" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:41.719813  993131 pod_ready.go:82] duration metric: took 2.034287913s for pod "coredns-668d6bf9bc-cn8tc" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:41.719823  993131 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-g9m4p" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:41.984277  993131 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.450618233s)
	I0120 12:35:41.984341  993131 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:41.984368  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .Close
	I0120 12:35:41.984689  993131 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:41.984706  993131 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:41.984718  993131 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:41.984728  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .Close
	I0120 12:35:41.984738  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | Closing plugin on server side
	I0120 12:35:41.985071  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | Closing plugin on server side
	I0120 12:35:41.985119  993131 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:41.985138  993131 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:41.986711  993131 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-981597 addons enable metrics-server
	
	I0120 12:35:41.988326  993131 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0120 12:35:41.989523  993131 addons.go:514] duration metric: took 2.650315965s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0120 12:35:43.726169  993131 pod_ready.go:103] pod "coredns-668d6bf9bc-g9m4p" in "kube-system" namespace has status "Ready":"False"
	I0120 12:35:45.813799  993131 pod_ready.go:103] pod "coredns-668d6bf9bc-g9m4p" in "kube-system" namespace has status "Ready":"False"
	I0120 12:35:48.227053  993131 pod_ready.go:103] pod "coredns-668d6bf9bc-g9m4p" in "kube-system" namespace has status "Ready":"False"
	I0120 12:35:48.729367  993131 pod_ready.go:93] pod "coredns-668d6bf9bc-g9m4p" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:48.729409  993131 pod_ready.go:82] duration metric: took 7.009577783s for pod "coredns-668d6bf9bc-g9m4p" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:48.729423  993131 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:48.735596  993131 pod_ready.go:93] pod "etcd-default-k8s-diff-port-981597" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:48.735621  993131 pod_ready.go:82] duration metric: took 6.188248ms for pod "etcd-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:48.735635  993131 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:48.748236  993131 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-981597" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:48.748262  993131 pod_ready.go:82] duration metric: took 12.618834ms for pod "kube-apiserver-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:48.748275  993131 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:48.758672  993131 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-981597" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:48.758703  993131 pod_ready.go:82] duration metric: took 10.418952ms for pod "kube-controller-manager-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:48.758717  993131 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sn66t" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:48.766403  993131 pod_ready.go:93] pod "kube-proxy-sn66t" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:48.766423  993131 pod_ready.go:82] duration metric: took 7.698237ms for pod "kube-proxy-sn66t" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:48.766433  993131 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:49.124688  993131 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-981597" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:49.124714  993131 pod_ready.go:82] duration metric: took 358.274237ms for pod "kube-scheduler-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:49.124723  993131 pod_ready.go:39] duration metric: took 9.44898025s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:35:49.124740  993131 api_server.go:52] waiting for apiserver process to appear ...
	I0120 12:35:49.124803  993131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:49.172406  993131 api_server.go:72] duration metric: took 9.833266884s to wait for apiserver process to appear ...
	I0120 12:35:49.172434  993131 api_server.go:88] waiting for apiserver healthz status ...
	I0120 12:35:49.172459  993131 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0120 12:35:49.177280  993131 api_server.go:279] https://192.168.39.222:8444/healthz returned 200:
	ok
	I0120 12:35:49.178469  993131 api_server.go:141] control plane version: v1.32.0
	I0120 12:35:49.178498  993131 api_server.go:131] duration metric: took 6.05652ms to wait for apiserver health ...
	I0120 12:35:49.178508  993131 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 12:35:49.328775  993131 system_pods.go:59] 9 kube-system pods found
	I0120 12:35:49.328811  993131 system_pods.go:61] "coredns-668d6bf9bc-cn8tc" [19a18120-8f3f-45bd-92f3-c291423f4895] Running
	I0120 12:35:49.328819  993131 system_pods.go:61] "coredns-668d6bf9bc-g9m4p" [9e3e4568-92ab-4ee5-b10a-5489b72248d6] Running
	I0120 12:35:49.328825  993131 system_pods.go:61] "etcd-default-k8s-diff-port-981597" [82f73dcc-1328-428e-8eb7-550c9b2d2b22] Running
	I0120 12:35:49.328831  993131 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-981597" [ff2d67bb-7ff8-44ac-a043-b6f423339fc7] Running
	I0120 12:35:49.328837  993131 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-981597" [fa91d7b8-200d-464f-b2b0-3a08a4f435d8] Running
	I0120 12:35:49.328842  993131 system_pods.go:61] "kube-proxy-sn66t" [a90855a0-c87a-4b55-bd0e-4b95b062479d] Running
	I0120 12:35:49.328847  993131 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-981597" [26bb9f8b-4e05-4cb9-a863-75d6a6a6b652] Running
	I0120 12:35:49.328856  993131 system_pods.go:61] "metrics-server-f79f97bbb-xkrxx" [cf78f231-b1e0-4566-817b-bfb9b8dac3f6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 12:35:49.328862  993131 system_pods.go:61] "storage-provisioner" [e77b12e8-25f3-43ad-8588-2716dd4ccbd1] Running
	I0120 12:35:49.328876  993131 system_pods.go:74] duration metric: took 150.359796ms to wait for pod list to return data ...
	I0120 12:35:49.328889  993131 default_sa.go:34] waiting for default service account to be created ...
	I0120 12:35:49.619916  993131 default_sa.go:45] found service account: "default"
	I0120 12:35:49.619954  993131 default_sa.go:55] duration metric: took 291.056324ms for default service account to be created ...
	I0120 12:35:49.619967  993131 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 12:35:49.728886  993131 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p default-k8s-diff-port-981597 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0": signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-981597 -n default-k8s-diff-port-981597
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-981597 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-981597 logs -n 25: (1.269805968s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-816069 sudo systemctl                        | auto-816069           | jenkins | v1.35.0 | 20 Jan 25 12:56 UTC | 20 Jan 25 12:56 UTC |
	|         | status kubelet --all --full                          |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-816069 sudo systemctl                        | auto-816069           | jenkins | v1.35.0 | 20 Jan 25 12:56 UTC | 20 Jan 25 12:56 UTC |
	|         | cat kubelet --no-pager                               |                       |         |         |                     |                     |
	| ssh     | -p auto-816069 sudo journalctl                       | auto-816069           | jenkins | v1.35.0 | 20 Jan 25 12:56 UTC | 20 Jan 25 12:56 UTC |
	|         | -xeu kubelet --all --full                            |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-816069 sudo cat                              | auto-816069           | jenkins | v1.35.0 | 20 Jan 25 12:56 UTC | 20 Jan 25 12:56 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p auto-816069 sudo cat                              | auto-816069           | jenkins | v1.35.0 | 20 Jan 25 12:56 UTC | 20 Jan 25 12:56 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p auto-816069 sudo systemctl                        | auto-816069           | jenkins | v1.35.0 | 20 Jan 25 12:56 UTC |                     |
	|         | status docker --all --full                           |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-816069 sudo systemctl                        | auto-816069           | jenkins | v1.35.0 | 20 Jan 25 12:56 UTC | 20 Jan 25 12:56 UTC |
	|         | cat docker --no-pager                                |                       |         |         |                     |                     |
	| ssh     | -p auto-816069 sudo cat                              | auto-816069           | jenkins | v1.35.0 | 20 Jan 25 12:56 UTC | 20 Jan 25 12:56 UTC |
	|         | /etc/docker/daemon.json                              |                       |         |         |                     |                     |
	| ssh     | -p auto-816069 sudo docker                           | auto-816069           | jenkins | v1.35.0 | 20 Jan 25 12:56 UTC |                     |
	|         | system info                                          |                       |         |         |                     |                     |
	| ssh     | -p auto-816069 sudo systemctl                        | auto-816069           | jenkins | v1.35.0 | 20 Jan 25 12:56 UTC |                     |
	|         | status cri-docker --all --full                       |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-816069 sudo systemctl                        | auto-816069           | jenkins | v1.35.0 | 20 Jan 25 12:56 UTC | 20 Jan 25 12:56 UTC |
	|         | cat cri-docker --no-pager                            |                       |         |         |                     |                     |
	| ssh     | -p auto-816069 sudo cat                              | auto-816069           | jenkins | v1.35.0 | 20 Jan 25 12:56 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p auto-816069 sudo cat                              | auto-816069           | jenkins | v1.35.0 | 20 Jan 25 12:56 UTC | 20 Jan 25 12:56 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p auto-816069 sudo                                  | auto-816069           | jenkins | v1.35.0 | 20 Jan 25 12:56 UTC | 20 Jan 25 12:56 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p auto-816069 sudo systemctl                        | auto-816069           | jenkins | v1.35.0 | 20 Jan 25 12:56 UTC |                     |
	|         | status containerd --all --full                       |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-816069 sudo systemctl                        | auto-816069           | jenkins | v1.35.0 | 20 Jan 25 12:56 UTC | 20 Jan 25 12:56 UTC |
	|         | cat containerd --no-pager                            |                       |         |         |                     |                     |
	| ssh     | -p auto-816069 sudo cat                              | auto-816069           | jenkins | v1.35.0 | 20 Jan 25 12:56 UTC | 20 Jan 25 12:56 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p auto-816069 sudo cat                              | auto-816069           | jenkins | v1.35.0 | 20 Jan 25 12:56 UTC | 20 Jan 25 12:56 UTC |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p auto-816069 sudo containerd                       | auto-816069           | jenkins | v1.35.0 | 20 Jan 25 12:56 UTC | 20 Jan 25 12:56 UTC |
	|         | config dump                                          |                       |         |         |                     |                     |
	| ssh     | -p auto-816069 sudo systemctl                        | auto-816069           | jenkins | v1.35.0 | 20 Jan 25 12:56 UTC | 20 Jan 25 12:56 UTC |
	|         | status crio --all --full                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-816069 sudo systemctl                        | auto-816069           | jenkins | v1.35.0 | 20 Jan 25 12:56 UTC | 20 Jan 25 12:56 UTC |
	|         | cat crio --no-pager                                  |                       |         |         |                     |                     |
	| ssh     | -p auto-816069 sudo find                             | auto-816069           | jenkins | v1.35.0 | 20 Jan 25 12:56 UTC | 20 Jan 25 12:56 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p auto-816069 sudo crio                             | auto-816069           | jenkins | v1.35.0 | 20 Jan 25 12:56 UTC | 20 Jan 25 12:56 UTC |
	|         | config                                               |                       |         |         |                     |                     |
	| delete  | -p auto-816069                                       | auto-816069           | jenkins | v1.35.0 | 20 Jan 25 12:56 UTC | 20 Jan 25 12:56 UTC |
	| start   | -p custom-flannel-816069                             | custom-flannel-816069 | jenkins | v1.35.0 | 20 Jan 25 12:56 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                       |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                       |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                       |         |         |                     |                     |
	|         | --driver=kvm2                                        |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 12:56:53
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 12:56:53.179804 1002509 out.go:345] Setting OutFile to fd 1 ...
	I0120 12:56:53.179912 1002509 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:56:53.179922 1002509 out.go:358] Setting ErrFile to fd 2...
	I0120 12:56:53.179929 1002509 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:56:53.180123 1002509 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-942401/.minikube/bin
	I0120 12:56:53.180782 1002509 out.go:352] Setting JSON to false
	I0120 12:56:53.181960 1002509 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":20356,"bootTime":1737357457,"procs":299,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 12:56:53.182053 1002509 start.go:139] virtualization: kvm guest
	I0120 12:56:53.184216 1002509 out.go:177] * [custom-flannel-816069] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 12:56:53.185410 1002509 notify.go:220] Checking for updates...
	I0120 12:56:53.185427 1002509 out.go:177]   - MINIKUBE_LOCATION=20151
	I0120 12:56:53.186501 1002509 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 12:56:53.187607 1002509 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:56:53.188740 1002509 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-942401/.minikube
	I0120 12:56:53.189935 1002509 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 12:56:53.191024 1002509 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 12:56:48.314439 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:56:48.314922 1001113 main.go:141] libmachine: (kindnet-816069) DBG | unable to find current IP address of domain kindnet-816069 in network mk-kindnet-816069
	I0120 12:56:48.314955 1001113 main.go:141] libmachine: (kindnet-816069) DBG | I0120 12:56:48.314886 1001175 retry.go:31] will retry after 1.159321837s: waiting for domain to come up
	I0120 12:56:49.475426 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:56:49.475892 1001113 main.go:141] libmachine: (kindnet-816069) DBG | unable to find current IP address of domain kindnet-816069 in network mk-kindnet-816069
	I0120 12:56:49.475922 1001113 main.go:141] libmachine: (kindnet-816069) DBG | I0120 12:56:49.475854 1001175 retry.go:31] will retry after 1.120165374s: waiting for domain to come up
	I0120 12:56:50.598001 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:56:50.598459 1001113 main.go:141] libmachine: (kindnet-816069) DBG | unable to find current IP address of domain kindnet-816069 in network mk-kindnet-816069
	I0120 12:56:50.598492 1001113 main.go:141] libmachine: (kindnet-816069) DBG | I0120 12:56:50.598428 1001175 retry.go:31] will retry after 1.294018241s: waiting for domain to come up
	I0120 12:56:51.893553 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:56:51.893998 1001113 main.go:141] libmachine: (kindnet-816069) DBG | unable to find current IP address of domain kindnet-816069 in network mk-kindnet-816069
	I0120 12:56:51.894018 1001113 main.go:141] libmachine: (kindnet-816069) DBG | I0120 12:56:51.893978 1001175 retry.go:31] will retry after 2.204772025s: waiting for domain to come up
	I0120 12:56:53.192575 1002509 config.go:182] Loaded profile config "calico-816069": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:56:53.192741 1002509 config.go:182] Loaded profile config "default-k8s-diff-port-981597": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:56:53.192872 1002509 config.go:182] Loaded profile config "kindnet-816069": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:56:53.193000 1002509 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 12:56:53.229938 1002509 out.go:177] * Using the kvm2 driver based on user configuration
	I0120 12:56:53.230897 1002509 start.go:297] selected driver: kvm2
	I0120 12:56:53.230913 1002509 start.go:901] validating driver "kvm2" against <nil>
	I0120 12:56:53.230924 1002509 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 12:56:53.231620 1002509 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:56:53.231719 1002509 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20151-942401/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 12:56:53.246690 1002509 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 12:56:53.246739 1002509 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 12:56:53.247039 1002509 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 12:56:53.247085 1002509 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0120 12:56:53.247114 1002509 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0120 12:56:53.247182 1002509 start.go:340] cluster config:
	{Name:custom-flannel-816069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:custom-flannel-816069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:56:53.247302 1002509 iso.go:125] acquiring lock: {Name:mk7f0ac9e7ba04626414e742c3e5292d79996a4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:56:53.248860 1002509 out.go:177] * Starting "custom-flannel-816069" primary control-plane node in "custom-flannel-816069" cluster
	I0120 12:56:53.250016 1002509 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 12:56:53.250053 1002509 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0120 12:56:53.250079 1002509 cache.go:56] Caching tarball of preloaded images
	I0120 12:56:53.250200 1002509 preload.go:172] Found /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 12:56:53.250213 1002509 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0120 12:56:53.250295 1002509 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/custom-flannel-816069/config.json ...
	I0120 12:56:53.250313 1002509 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/custom-flannel-816069/config.json: {Name:mkdc0f89b038e5562728e3ed723ddc95f2727bc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:56:53.250463 1002509 start.go:360] acquireMachinesLock for custom-flannel-816069: {Name:mkd5527ce9753efd08511b23d71dbb6bbf416f1b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 12:56:54.100079 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:56:54.100837 1001113 main.go:141] libmachine: (kindnet-816069) DBG | unable to find current IP address of domain kindnet-816069 in network mk-kindnet-816069
	I0120 12:56:54.100868 1001113 main.go:141] libmachine: (kindnet-816069) DBG | I0120 12:56:54.100782 1001175 retry.go:31] will retry after 2.058014513s: waiting for domain to come up
	I0120 12:56:56.160360 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:56:56.160875 1001113 main.go:141] libmachine: (kindnet-816069) DBG | unable to find current IP address of domain kindnet-816069 in network mk-kindnet-816069
	I0120 12:56:56.160928 1001113 main.go:141] libmachine: (kindnet-816069) DBG | I0120 12:56:56.160836 1001175 retry.go:31] will retry after 3.533749745s: waiting for domain to come up
	I0120 12:56:59.696609 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:56:59.697107 1001113 main.go:141] libmachine: (kindnet-816069) DBG | unable to find current IP address of domain kindnet-816069 in network mk-kindnet-816069
	I0120 12:56:59.697152 1001113 main.go:141] libmachine: (kindnet-816069) DBG | I0120 12:56:59.697092 1001175 retry.go:31] will retry after 2.919697786s: waiting for domain to come up
	I0120 12:57:02.619592 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:02.620147 1001113 main.go:141] libmachine: (kindnet-816069) DBG | unable to find current IP address of domain kindnet-816069 in network mk-kindnet-816069
	I0120 12:57:02.620202 1001113 main.go:141] libmachine: (kindnet-816069) DBG | I0120 12:57:02.620145 1001175 retry.go:31] will retry after 3.415081193s: waiting for domain to come up
	I0120 12:57:07.450507 1001288 start.go:364] duration metric: took 23.162916808s to acquireMachinesLock for "calico-816069"
	I0120 12:57:07.450610 1001288 start.go:93] Provisioning new machine with config: &{Name:calico-816069 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:calico-816069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 12:57:07.450734 1001288 start.go:125] createHost starting for "" (driver="kvm2")
	I0120 12:57:06.038286 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:06.039062 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has current primary IP address 192.168.50.105 and MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:06.039091 1001113 main.go:141] libmachine: (kindnet-816069) found domain IP: 192.168.50.105
	I0120 12:57:06.039140 1001113 main.go:141] libmachine: (kindnet-816069) reserving static IP address...
	I0120 12:57:06.039915 1001113 main.go:141] libmachine: (kindnet-816069) DBG | unable to find host DHCP lease matching {name: "kindnet-816069", mac: "52:54:00:f5:57:d7", ip: "192.168.50.105"} in network mk-kindnet-816069
	I0120 12:57:06.115199 1001113 main.go:141] libmachine: (kindnet-816069) DBG | Getting to WaitForSSH function...
	I0120 12:57:06.115341 1001113 main.go:141] libmachine: (kindnet-816069) reserved static IP address 192.168.50.105 for domain kindnet-816069
	I0120 12:57:06.115362 1001113 main.go:141] libmachine: (kindnet-816069) waiting for SSH...
	I0120 12:57:06.118096 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:06.118559 1001113 main.go:141] libmachine: (kindnet-816069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:57:d7", ip: ""} in network mk-kindnet-816069: {Iface:virbr2 ExpiryTime:2025-01-20 13:56:59 +0000 UTC Type:0 Mac:52:54:00:f5:57:d7 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f5:57:d7}
	I0120 12:57:06.118591 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined IP address 192.168.50.105 and MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:06.118738 1001113 main.go:141] libmachine: (kindnet-816069) DBG | Using SSH client type: external
	I0120 12:57:06.118812 1001113 main.go:141] libmachine: (kindnet-816069) DBG | Using SSH private key: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/kindnet-816069/id_rsa (-rw-------)
	I0120 12:57:06.118861 1001113 main.go:141] libmachine: (kindnet-816069) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20151-942401/.minikube/machines/kindnet-816069/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 12:57:06.118884 1001113 main.go:141] libmachine: (kindnet-816069) DBG | About to run SSH command:
	I0120 12:57:06.118900 1001113 main.go:141] libmachine: (kindnet-816069) DBG | exit 0
	I0120 12:57:06.250150 1001113 main.go:141] libmachine: (kindnet-816069) DBG | SSH cmd err, output: <nil>: 
	I0120 12:57:06.250492 1001113 main.go:141] libmachine: (kindnet-816069) KVM machine creation complete
	I0120 12:57:06.250772 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetConfigRaw
	I0120 12:57:06.251384 1001113 main.go:141] libmachine: (kindnet-816069) Calling .DriverName
	I0120 12:57:06.251596 1001113 main.go:141] libmachine: (kindnet-816069) Calling .DriverName
	I0120 12:57:06.251749 1001113 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0120 12:57:06.251767 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetState
	I0120 12:57:06.253076 1001113 main.go:141] libmachine: Detecting operating system of created instance...
	I0120 12:57:06.253089 1001113 main.go:141] libmachine: Waiting for SSH to be available...
	I0120 12:57:06.253094 1001113 main.go:141] libmachine: Getting to WaitForSSH function...
	I0120 12:57:06.253100 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHHostname
	I0120 12:57:06.255528 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:06.255875 1001113 main.go:141] libmachine: (kindnet-816069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:57:d7", ip: ""} in network mk-kindnet-816069: {Iface:virbr2 ExpiryTime:2025-01-20 13:56:59 +0000 UTC Type:0 Mac:52:54:00:f5:57:d7 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:kindnet-816069 Clientid:01:52:54:00:f5:57:d7}
	I0120 12:57:06.255906 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined IP address 192.168.50.105 and MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:06.256023 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHPort
	I0120 12:57:06.256218 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHKeyPath
	I0120 12:57:06.256376 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHKeyPath
	I0120 12:57:06.256514 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHUsername
	I0120 12:57:06.256655 1001113 main.go:141] libmachine: Using SSH client type: native
	I0120 12:57:06.256942 1001113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0120 12:57:06.256962 1001113 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0120 12:57:06.365414 1001113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 12:57:06.365436 1001113 main.go:141] libmachine: Detecting the provisioner...
	I0120 12:57:06.365443 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHHostname
	I0120 12:57:06.368381 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:06.368766 1001113 main.go:141] libmachine: (kindnet-816069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:57:d7", ip: ""} in network mk-kindnet-816069: {Iface:virbr2 ExpiryTime:2025-01-20 13:56:59 +0000 UTC Type:0 Mac:52:54:00:f5:57:d7 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:kindnet-816069 Clientid:01:52:54:00:f5:57:d7}
	I0120 12:57:06.368792 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined IP address 192.168.50.105 and MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:06.368923 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHPort
	I0120 12:57:06.369105 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHKeyPath
	I0120 12:57:06.369256 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHKeyPath
	I0120 12:57:06.369376 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHUsername
	I0120 12:57:06.369508 1001113 main.go:141] libmachine: Using SSH client type: native
	I0120 12:57:06.369689 1001113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0120 12:57:06.369703 1001113 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0120 12:57:06.478683 1001113 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0120 12:57:06.478749 1001113 main.go:141] libmachine: found compatible host: buildroot
	I0120 12:57:06.478756 1001113 main.go:141] libmachine: Provisioning with buildroot...
	I0120 12:57:06.478763 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetMachineName
	I0120 12:57:06.478966 1001113 buildroot.go:166] provisioning hostname "kindnet-816069"
	I0120 12:57:06.478990 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetMachineName
	I0120 12:57:06.479204 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHHostname
	I0120 12:57:06.481870 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:06.482375 1001113 main.go:141] libmachine: (kindnet-816069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:57:d7", ip: ""} in network mk-kindnet-816069: {Iface:virbr2 ExpiryTime:2025-01-20 13:56:59 +0000 UTC Type:0 Mac:52:54:00:f5:57:d7 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:kindnet-816069 Clientid:01:52:54:00:f5:57:d7}
	I0120 12:57:06.482405 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined IP address 192.168.50.105 and MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:06.482562 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHPort
	I0120 12:57:06.482770 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHKeyPath
	I0120 12:57:06.482961 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHKeyPath
	I0120 12:57:06.483119 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHUsername
	I0120 12:57:06.483310 1001113 main.go:141] libmachine: Using SSH client type: native
	I0120 12:57:06.483488 1001113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0120 12:57:06.483500 1001113 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-816069 && echo "kindnet-816069" | sudo tee /etc/hostname
	I0120 12:57:06.608116 1001113 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-816069
	
	I0120 12:57:06.608162 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHHostname
	I0120 12:57:06.611021 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:06.611426 1001113 main.go:141] libmachine: (kindnet-816069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:57:d7", ip: ""} in network mk-kindnet-816069: {Iface:virbr2 ExpiryTime:2025-01-20 13:56:59 +0000 UTC Type:0 Mac:52:54:00:f5:57:d7 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:kindnet-816069 Clientid:01:52:54:00:f5:57:d7}
	I0120 12:57:06.611450 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined IP address 192.168.50.105 and MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:06.611667 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHPort
	I0120 12:57:06.611864 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHKeyPath
	I0120 12:57:06.612040 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHKeyPath
	I0120 12:57:06.612201 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHUsername
	I0120 12:57:06.612335 1001113 main.go:141] libmachine: Using SSH client type: native
	I0120 12:57:06.612538 1001113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0120 12:57:06.612555 1001113 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-816069' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-816069/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-816069' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 12:57:06.725863 1001113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 12:57:06.725899 1001113 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20151-942401/.minikube CaCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20151-942401/.minikube}
	I0120 12:57:06.725943 1001113 buildroot.go:174] setting up certificates
	I0120 12:57:06.725954 1001113 provision.go:84] configureAuth start
	I0120 12:57:06.725967 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetMachineName
	I0120 12:57:06.726225 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetIP
	I0120 12:57:06.728994 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:06.729423 1001113 main.go:141] libmachine: (kindnet-816069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:57:d7", ip: ""} in network mk-kindnet-816069: {Iface:virbr2 ExpiryTime:2025-01-20 13:56:59 +0000 UTC Type:0 Mac:52:54:00:f5:57:d7 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:kindnet-816069 Clientid:01:52:54:00:f5:57:d7}
	I0120 12:57:06.729457 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined IP address 192.168.50.105 and MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:06.729597 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHHostname
	I0120 12:57:06.732004 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:06.732282 1001113 main.go:141] libmachine: (kindnet-816069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:57:d7", ip: ""} in network mk-kindnet-816069: {Iface:virbr2 ExpiryTime:2025-01-20 13:56:59 +0000 UTC Type:0 Mac:52:54:00:f5:57:d7 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:kindnet-816069 Clientid:01:52:54:00:f5:57:d7}
	I0120 12:57:06.732309 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined IP address 192.168.50.105 and MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:06.732418 1001113 provision.go:143] copyHostCerts
	I0120 12:57:06.732475 1001113 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem, removing ...
	I0120 12:57:06.732489 1001113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem
	I0120 12:57:06.732574 1001113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem (1078 bytes)
	I0120 12:57:06.732683 1001113 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem, removing ...
	I0120 12:57:06.732715 1001113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem
	I0120 12:57:06.732759 1001113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem (1123 bytes)
	I0120 12:57:06.732883 1001113 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem, removing ...
	I0120 12:57:06.732897 1001113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem
	I0120 12:57:06.732936 1001113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem (1675 bytes)
	I0120 12:57:06.733010 1001113 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem org=jenkins.kindnet-816069 san=[127.0.0.1 192.168.50.105 kindnet-816069 localhost minikube]
	I0120 12:57:06.831665 1001113 provision.go:177] copyRemoteCerts
	I0120 12:57:06.831717 1001113 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 12:57:06.831736 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHHostname
	I0120 12:57:06.834500 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:06.834877 1001113 main.go:141] libmachine: (kindnet-816069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:57:d7", ip: ""} in network mk-kindnet-816069: {Iface:virbr2 ExpiryTime:2025-01-20 13:56:59 +0000 UTC Type:0 Mac:52:54:00:f5:57:d7 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:kindnet-816069 Clientid:01:52:54:00:f5:57:d7}
	I0120 12:57:06.834906 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined IP address 192.168.50.105 and MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:06.835062 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHPort
	I0120 12:57:06.835268 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHKeyPath
	I0120 12:57:06.835396 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHUsername
	I0120 12:57:06.835591 1001113 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/kindnet-816069/id_rsa Username:docker}
	I0120 12:57:06.919918 1001113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0120 12:57:06.944624 1001113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0120 12:57:06.966949 1001113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0120 12:57:06.987491 1001113 provision.go:87] duration metric: took 261.522774ms to configureAuth
	I0120 12:57:06.987514 1001113 buildroot.go:189] setting minikube options for container-runtime
	I0120 12:57:06.987687 1001113 config.go:182] Loaded profile config "kindnet-816069": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:57:06.987775 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHHostname
	I0120 12:57:06.990416 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:06.990740 1001113 main.go:141] libmachine: (kindnet-816069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:57:d7", ip: ""} in network mk-kindnet-816069: {Iface:virbr2 ExpiryTime:2025-01-20 13:56:59 +0000 UTC Type:0 Mac:52:54:00:f5:57:d7 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:kindnet-816069 Clientid:01:52:54:00:f5:57:d7}
	I0120 12:57:06.990771 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined IP address 192.168.50.105 and MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:06.990991 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHPort
	I0120 12:57:06.991222 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHKeyPath
	I0120 12:57:06.991373 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHKeyPath
	I0120 12:57:06.991525 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHUsername
	I0120 12:57:06.991674 1001113 main.go:141] libmachine: Using SSH client type: native
	I0120 12:57:06.991829 1001113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0120 12:57:06.991843 1001113 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 12:57:07.209013 1001113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 12:57:07.209046 1001113 main.go:141] libmachine: Checking connection to Docker...
	I0120 12:57:07.209057 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetURL
	I0120 12:57:07.210417 1001113 main.go:141] libmachine: (kindnet-816069) DBG | using libvirt version 6000000
	I0120 12:57:07.213323 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:07.213724 1001113 main.go:141] libmachine: (kindnet-816069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:57:d7", ip: ""} in network mk-kindnet-816069: {Iface:virbr2 ExpiryTime:2025-01-20 13:56:59 +0000 UTC Type:0 Mac:52:54:00:f5:57:d7 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:kindnet-816069 Clientid:01:52:54:00:f5:57:d7}
	I0120 12:57:07.213753 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined IP address 192.168.50.105 and MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:07.213913 1001113 main.go:141] libmachine: Docker is up and running!
	I0120 12:57:07.213923 1001113 main.go:141] libmachine: Reticulating splines...
	I0120 12:57:07.213931 1001113 client.go:171] duration metric: took 23.901445411s to LocalClient.Create
	I0120 12:57:07.213960 1001113 start.go:167] duration metric: took 23.901516702s to libmachine.API.Create "kindnet-816069"
	I0120 12:57:07.213975 1001113 start.go:293] postStartSetup for "kindnet-816069" (driver="kvm2")
	I0120 12:57:07.213985 1001113 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 12:57:07.214003 1001113 main.go:141] libmachine: (kindnet-816069) Calling .DriverName
	I0120 12:57:07.214255 1001113 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 12:57:07.214291 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHHostname
	I0120 12:57:07.216585 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:07.216952 1001113 main.go:141] libmachine: (kindnet-816069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:57:d7", ip: ""} in network mk-kindnet-816069: {Iface:virbr2 ExpiryTime:2025-01-20 13:56:59 +0000 UTC Type:0 Mac:52:54:00:f5:57:d7 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:kindnet-816069 Clientid:01:52:54:00:f5:57:d7}
	I0120 12:57:07.216973 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined IP address 192.168.50.105 and MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:07.217120 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHPort
	I0120 12:57:07.217294 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHKeyPath
	I0120 12:57:07.217440 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHUsername
	I0120 12:57:07.217614 1001113 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/kindnet-816069/id_rsa Username:docker}
	I0120 12:57:07.300242 1001113 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 12:57:07.304199 1001113 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 12:57:07.304220 1001113 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-942401/.minikube/addons for local assets ...
	I0120 12:57:07.304290 1001113 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-942401/.minikube/files for local assets ...
	I0120 12:57:07.304397 1001113 filesync.go:149] local asset: /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem -> 9496562.pem in /etc/ssl/certs
	I0120 12:57:07.304521 1001113 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 12:57:07.313105 1001113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem --> /etc/ssl/certs/9496562.pem (1708 bytes)
	I0120 12:57:07.336129 1001113 start.go:296] duration metric: took 122.143164ms for postStartSetup
	I0120 12:57:07.336174 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetConfigRaw
	I0120 12:57:07.336785 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetIP
	I0120 12:57:07.339182 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:07.339517 1001113 main.go:141] libmachine: (kindnet-816069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:57:d7", ip: ""} in network mk-kindnet-816069: {Iface:virbr2 ExpiryTime:2025-01-20 13:56:59 +0000 UTC Type:0 Mac:52:54:00:f5:57:d7 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:kindnet-816069 Clientid:01:52:54:00:f5:57:d7}
	I0120 12:57:07.339544 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined IP address 192.168.50.105 and MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:07.339767 1001113 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kindnet-816069/config.json ...
	I0120 12:57:07.339927 1001113 start.go:128] duration metric: took 24.054055613s to createHost
	I0120 12:57:07.339948 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHHostname
	I0120 12:57:07.342179 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:07.342477 1001113 main.go:141] libmachine: (kindnet-816069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:57:d7", ip: ""} in network mk-kindnet-816069: {Iface:virbr2 ExpiryTime:2025-01-20 13:56:59 +0000 UTC Type:0 Mac:52:54:00:f5:57:d7 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:kindnet-816069 Clientid:01:52:54:00:f5:57:d7}
	I0120 12:57:07.342510 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined IP address 192.168.50.105 and MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:07.342626 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHPort
	I0120 12:57:07.342813 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHKeyPath
	I0120 12:57:07.342974 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHKeyPath
	I0120 12:57:07.343138 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHUsername
	I0120 12:57:07.343306 1001113 main.go:141] libmachine: Using SSH client type: native
	I0120 12:57:07.343530 1001113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0120 12:57:07.343545 1001113 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 12:57:07.450359 1001113 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737377827.416219860
	
	I0120 12:57:07.450381 1001113 fix.go:216] guest clock: 1737377827.416219860
	I0120 12:57:07.450388 1001113 fix.go:229] Guest: 2025-01-20 12:57:07.41621986 +0000 UTC Remote: 2025-01-20 12:57:07.339937662 +0000 UTC m=+24.186810143 (delta=76.282198ms)
	I0120 12:57:07.450410 1001113 fix.go:200] guest clock delta is within tolerance: 76.282198ms
	I0120 12:57:07.450417 1001113 start.go:83] releasing machines lock for "kindnet-816069", held for 24.164613399s
	I0120 12:57:07.450449 1001113 main.go:141] libmachine: (kindnet-816069) Calling .DriverName
	I0120 12:57:07.450724 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetIP
	I0120 12:57:07.453507 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:07.453867 1001113 main.go:141] libmachine: (kindnet-816069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:57:d7", ip: ""} in network mk-kindnet-816069: {Iface:virbr2 ExpiryTime:2025-01-20 13:56:59 +0000 UTC Type:0 Mac:52:54:00:f5:57:d7 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:kindnet-816069 Clientid:01:52:54:00:f5:57:d7}
	I0120 12:57:07.453897 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined IP address 192.168.50.105 and MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:07.454106 1001113 main.go:141] libmachine: (kindnet-816069) Calling .DriverName
	I0120 12:57:07.454921 1001113 main.go:141] libmachine: (kindnet-816069) Calling .DriverName
	I0120 12:57:07.455125 1001113 main.go:141] libmachine: (kindnet-816069) Calling .DriverName
	I0120 12:57:07.455179 1001113 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 12:57:07.455236 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHHostname
	I0120 12:57:07.455346 1001113 ssh_runner.go:195] Run: cat /version.json
	I0120 12:57:07.455382 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHHostname
	I0120 12:57:07.458045 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:07.458072 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:07.458413 1001113 main.go:141] libmachine: (kindnet-816069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:57:d7", ip: ""} in network mk-kindnet-816069: {Iface:virbr2 ExpiryTime:2025-01-20 13:56:59 +0000 UTC Type:0 Mac:52:54:00:f5:57:d7 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:kindnet-816069 Clientid:01:52:54:00:f5:57:d7}
	I0120 12:57:07.458446 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined IP address 192.168.50.105 and MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:07.458509 1001113 main.go:141] libmachine: (kindnet-816069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:57:d7", ip: ""} in network mk-kindnet-816069: {Iface:virbr2 ExpiryTime:2025-01-20 13:56:59 +0000 UTC Type:0 Mac:52:54:00:f5:57:d7 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:kindnet-816069 Clientid:01:52:54:00:f5:57:d7}
	I0120 12:57:07.458553 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined IP address 192.168.50.105 and MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:07.458601 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHPort
	I0120 12:57:07.458778 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHPort
	I0120 12:57:07.458950 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHKeyPath
	I0120 12:57:07.458978 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHKeyPath
	I0120 12:57:07.459093 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHUsername
	I0120 12:57:07.459156 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHUsername
	I0120 12:57:07.459238 1001113 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/kindnet-816069/id_rsa Username:docker}
	I0120 12:57:07.459309 1001113 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/kindnet-816069/id_rsa Username:docker}
	I0120 12:57:07.539353 1001113 ssh_runner.go:195] Run: systemctl --version
	I0120 12:57:07.572124 1001113 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 12:57:07.733738 1001113 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 12:57:07.739411 1001113 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 12:57:07.739491 1001113 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 12:57:07.754374 1001113 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 12:57:07.754399 1001113 start.go:495] detecting cgroup driver to use...
	I0120 12:57:07.754467 1001113 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 12:57:07.770420 1001113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 12:57:07.784165 1001113 docker.go:217] disabling cri-docker service (if available) ...
	I0120 12:57:07.784230 1001113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 12:57:07.797588 1001113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 12:57:07.815088 1001113 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 12:57:07.932672 1001113 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 12:57:08.072760 1001113 docker.go:233] disabling docker service ...
	I0120 12:57:08.072843 1001113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 12:57:08.086103 1001113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 12:57:08.098869 1001113 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 12:57:08.243223 1001113 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 12:57:08.368650 1001113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 12:57:08.382329 1001113 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 12:57:08.400350 1001113 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0120 12:57:08.400407 1001113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:57:08.409749 1001113 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 12:57:08.409804 1001113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:57:08.419296 1001113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:57:08.433944 1001113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:57:08.448559 1001113 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 12:57:08.459602 1001113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:57:08.469802 1001113 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:57:08.485694 1001113 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:57:08.495002 1001113 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 12:57:08.504565 1001113 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 12:57:08.504611 1001113 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 12:57:08.517670 1001113 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 12:57:08.527495 1001113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:57:08.646406 1001113 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 12:57:08.731886 1001113 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 12:57:08.731966 1001113 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 12:57:08.736305 1001113 start.go:563] Will wait 60s for crictl version
	I0120 12:57:08.736369 1001113 ssh_runner.go:195] Run: which crictl
	I0120 12:57:08.740586 1001113 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 12:57:08.787889 1001113 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 12:57:08.787985 1001113 ssh_runner.go:195] Run: crio --version
	I0120 12:57:08.817993 1001113 ssh_runner.go:195] Run: crio --version
	I0120 12:57:08.847843 1001113 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0120 12:57:07.452689 1001288 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0120 12:57:07.452900 1001288 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:57:07.452947 1001288 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:57:07.469571 1001288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43709
	I0120 12:57:07.470045 1001288 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:57:07.470670 1001288 main.go:141] libmachine: Using API Version  1
	I0120 12:57:07.470693 1001288 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:57:07.471081 1001288 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:57:07.471289 1001288 main.go:141] libmachine: (calico-816069) Calling .GetMachineName
	I0120 12:57:07.471439 1001288 main.go:141] libmachine: (calico-816069) Calling .DriverName
	I0120 12:57:07.471593 1001288 start.go:159] libmachine.API.Create for "calico-816069" (driver="kvm2")
	I0120 12:57:07.471621 1001288 client.go:168] LocalClient.Create starting
	I0120 12:57:07.471651 1001288 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem
	I0120 12:57:07.471687 1001288 main.go:141] libmachine: Decoding PEM data...
	I0120 12:57:07.471700 1001288 main.go:141] libmachine: Parsing certificate...
	I0120 12:57:07.471751 1001288 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem
	I0120 12:57:07.471769 1001288 main.go:141] libmachine: Decoding PEM data...
	I0120 12:57:07.471779 1001288 main.go:141] libmachine: Parsing certificate...
	I0120 12:57:07.471792 1001288 main.go:141] libmachine: Running pre-create checks...
	I0120 12:57:07.471801 1001288 main.go:141] libmachine: (calico-816069) Calling .PreCreateCheck
	I0120 12:57:07.472125 1001288 main.go:141] libmachine: (calico-816069) Calling .GetConfigRaw
	I0120 12:57:07.472548 1001288 main.go:141] libmachine: Creating machine...
	I0120 12:57:07.472565 1001288 main.go:141] libmachine: (calico-816069) Calling .Create
	I0120 12:57:07.472707 1001288 main.go:141] libmachine: (calico-816069) creating KVM machine...
	I0120 12:57:07.472729 1001288 main.go:141] libmachine: (calico-816069) creating network...
	I0120 12:57:07.473791 1001288 main.go:141] libmachine: (calico-816069) DBG | found existing default KVM network
	I0120 12:57:07.474971 1001288 main.go:141] libmachine: (calico-816069) DBG | I0120 12:57:07.474835 1002621 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:dd:36:f0} reservation:<nil>}
	I0120 12:57:07.475760 1001288 main.go:141] libmachine: (calico-816069) DBG | I0120 12:57:07.475674 1002621 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:8c:bd:4d} reservation:<nil>}
	I0120 12:57:07.476747 1001288 main.go:141] libmachine: (calico-816069) DBG | I0120 12:57:07.476663 1002621 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003f6880}
	I0120 12:57:07.476773 1001288 main.go:141] libmachine: (calico-816069) DBG | created network xml: 
	I0120 12:57:07.476783 1001288 main.go:141] libmachine: (calico-816069) DBG | <network>
	I0120 12:57:07.476791 1001288 main.go:141] libmachine: (calico-816069) DBG |   <name>mk-calico-816069</name>
	I0120 12:57:07.476800 1001288 main.go:141] libmachine: (calico-816069) DBG |   <dns enable='no'/>
	I0120 12:57:07.476806 1001288 main.go:141] libmachine: (calico-816069) DBG |   
	I0120 12:57:07.476820 1001288 main.go:141] libmachine: (calico-816069) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0120 12:57:07.476833 1001288 main.go:141] libmachine: (calico-816069) DBG |     <dhcp>
	I0120 12:57:07.476843 1001288 main.go:141] libmachine: (calico-816069) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0120 12:57:07.476850 1001288 main.go:141] libmachine: (calico-816069) DBG |     </dhcp>
	I0120 12:57:07.476876 1001288 main.go:141] libmachine: (calico-816069) DBG |   </ip>
	I0120 12:57:07.476897 1001288 main.go:141] libmachine: (calico-816069) DBG |   
	I0120 12:57:07.476908 1001288 main.go:141] libmachine: (calico-816069) DBG | </network>
	I0120 12:57:07.476918 1001288 main.go:141] libmachine: (calico-816069) DBG | 
	I0120 12:57:07.481779 1001288 main.go:141] libmachine: (calico-816069) DBG | trying to create private KVM network mk-calico-816069 192.168.61.0/24...
	I0120 12:57:07.553493 1001288 main.go:141] libmachine: (calico-816069) DBG | private KVM network mk-calico-816069 192.168.61.0/24 created
	I0120 12:57:07.553521 1001288 main.go:141] libmachine: (calico-816069) DBG | I0120 12:57:07.553452 1002621 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20151-942401/.minikube
	I0120 12:57:07.553536 1001288 main.go:141] libmachine: (calico-816069) setting up store path in /home/jenkins/minikube-integration/20151-942401/.minikube/machines/calico-816069 ...
	I0120 12:57:07.553561 1001288 main.go:141] libmachine: (calico-816069) building disk image from file:///home/jenkins/minikube-integration/20151-942401/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0120 12:57:07.553603 1001288 main.go:141] libmachine: (calico-816069) Downloading /home/jenkins/minikube-integration/20151-942401/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20151-942401/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0120 12:57:07.838801 1001288 main.go:141] libmachine: (calico-816069) DBG | I0120 12:57:07.838665 1002621 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/calico-816069/id_rsa...
	I0120 12:57:07.958773 1001288 main.go:141] libmachine: (calico-816069) DBG | I0120 12:57:07.958661 1002621 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/calico-816069/calico-816069.rawdisk...
	I0120 12:57:07.958800 1001288 main.go:141] libmachine: (calico-816069) DBG | Writing magic tar header
	I0120 12:57:07.958810 1001288 main.go:141] libmachine: (calico-816069) DBG | Writing SSH key tar header
	I0120 12:57:07.958915 1001288 main.go:141] libmachine: (calico-816069) DBG | I0120 12:57:07.958824 1002621 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20151-942401/.minikube/machines/calico-816069 ...
	I0120 12:57:07.958953 1001288 main.go:141] libmachine: (calico-816069) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/calico-816069
	I0120 12:57:07.958999 1001288 main.go:141] libmachine: (calico-816069) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-942401/.minikube/machines
	I0120 12:57:07.959021 1001288 main.go:141] libmachine: (calico-816069) setting executable bit set on /home/jenkins/minikube-integration/20151-942401/.minikube/machines/calico-816069 (perms=drwx------)
	I0120 12:57:07.959037 1001288 main.go:141] libmachine: (calico-816069) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-942401/.minikube
	I0120 12:57:07.959050 1001288 main.go:141] libmachine: (calico-816069) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-942401
	I0120 12:57:07.959063 1001288 main.go:141] libmachine: (calico-816069) setting executable bit set on /home/jenkins/minikube-integration/20151-942401/.minikube/machines (perms=drwxr-xr-x)
	I0120 12:57:07.959075 1001288 main.go:141] libmachine: (calico-816069) setting executable bit set on /home/jenkins/minikube-integration/20151-942401/.minikube (perms=drwxr-xr-x)
	I0120 12:57:07.959087 1001288 main.go:141] libmachine: (calico-816069) setting executable bit set on /home/jenkins/minikube-integration/20151-942401 (perms=drwxrwxr-x)
	I0120 12:57:07.959107 1001288 main.go:141] libmachine: (calico-816069) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0120 12:57:07.959119 1001288 main.go:141] libmachine: (calico-816069) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0120 12:57:07.959132 1001288 main.go:141] libmachine: (calico-816069) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0120 12:57:07.959141 1001288 main.go:141] libmachine: (calico-816069) DBG | checking permissions on dir: /home/jenkins
	I0120 12:57:07.959152 1001288 main.go:141] libmachine: (calico-816069) DBG | checking permissions on dir: /home
	I0120 12:57:07.959162 1001288 main.go:141] libmachine: (calico-816069) creating domain...
	I0120 12:57:07.959169 1001288 main.go:141] libmachine: (calico-816069) DBG | skipping /home - not owner
	I0120 12:57:07.960215 1001288 main.go:141] libmachine: (calico-816069) define libvirt domain using xml: 
	I0120 12:57:07.960233 1001288 main.go:141] libmachine: (calico-816069) <domain type='kvm'>
	I0120 12:57:07.960240 1001288 main.go:141] libmachine: (calico-816069)   <name>calico-816069</name>
	I0120 12:57:07.960244 1001288 main.go:141] libmachine: (calico-816069)   <memory unit='MiB'>3072</memory>
	I0120 12:57:07.960249 1001288 main.go:141] libmachine: (calico-816069)   <vcpu>2</vcpu>
	I0120 12:57:07.960253 1001288 main.go:141] libmachine: (calico-816069)   <features>
	I0120 12:57:07.960264 1001288 main.go:141] libmachine: (calico-816069)     <acpi/>
	I0120 12:57:07.960275 1001288 main.go:141] libmachine: (calico-816069)     <apic/>
	I0120 12:57:07.960282 1001288 main.go:141] libmachine: (calico-816069)     <pae/>
	I0120 12:57:07.960290 1001288 main.go:141] libmachine: (calico-816069)     
	I0120 12:57:07.960319 1001288 main.go:141] libmachine: (calico-816069)   </features>
	I0120 12:57:07.960344 1001288 main.go:141] libmachine: (calico-816069)   <cpu mode='host-passthrough'>
	I0120 12:57:07.960368 1001288 main.go:141] libmachine: (calico-816069)   
	I0120 12:57:07.960386 1001288 main.go:141] libmachine: (calico-816069)   </cpu>
	I0120 12:57:07.960398 1001288 main.go:141] libmachine: (calico-816069)   <os>
	I0120 12:57:07.960409 1001288 main.go:141] libmachine: (calico-816069)     <type>hvm</type>
	I0120 12:57:07.960417 1001288 main.go:141] libmachine: (calico-816069)     <boot dev='cdrom'/>
	I0120 12:57:07.960423 1001288 main.go:141] libmachine: (calico-816069)     <boot dev='hd'/>
	I0120 12:57:07.960428 1001288 main.go:141] libmachine: (calico-816069)     <bootmenu enable='no'/>
	I0120 12:57:07.960435 1001288 main.go:141] libmachine: (calico-816069)   </os>
	I0120 12:57:07.960439 1001288 main.go:141] libmachine: (calico-816069)   <devices>
	I0120 12:57:07.960450 1001288 main.go:141] libmachine: (calico-816069)     <disk type='file' device='cdrom'>
	I0120 12:57:07.960465 1001288 main.go:141] libmachine: (calico-816069)       <source file='/home/jenkins/minikube-integration/20151-942401/.minikube/machines/calico-816069/boot2docker.iso'/>
	I0120 12:57:07.960479 1001288 main.go:141] libmachine: (calico-816069)       <target dev='hdc' bus='scsi'/>
	I0120 12:57:07.960491 1001288 main.go:141] libmachine: (calico-816069)       <readonly/>
	I0120 12:57:07.960501 1001288 main.go:141] libmachine: (calico-816069)     </disk>
	I0120 12:57:07.960510 1001288 main.go:141] libmachine: (calico-816069)     <disk type='file' device='disk'>
	I0120 12:57:07.960519 1001288 main.go:141] libmachine: (calico-816069)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0120 12:57:07.960526 1001288 main.go:141] libmachine: (calico-816069)       <source file='/home/jenkins/minikube-integration/20151-942401/.minikube/machines/calico-816069/calico-816069.rawdisk'/>
	I0120 12:57:07.960534 1001288 main.go:141] libmachine: (calico-816069)       <target dev='hda' bus='virtio'/>
	I0120 12:57:07.960545 1001288 main.go:141] libmachine: (calico-816069)     </disk>
	I0120 12:57:07.960560 1001288 main.go:141] libmachine: (calico-816069)     <interface type='network'>
	I0120 12:57:07.960570 1001288 main.go:141] libmachine: (calico-816069)       <source network='mk-calico-816069'/>
	I0120 12:57:07.960581 1001288 main.go:141] libmachine: (calico-816069)       <model type='virtio'/>
	I0120 12:57:07.960588 1001288 main.go:141] libmachine: (calico-816069)     </interface>
	I0120 12:57:07.960598 1001288 main.go:141] libmachine: (calico-816069)     <interface type='network'>
	I0120 12:57:07.960608 1001288 main.go:141] libmachine: (calico-816069)       <source network='default'/>
	I0120 12:57:07.960618 1001288 main.go:141] libmachine: (calico-816069)       <model type='virtio'/>
	I0120 12:57:07.960627 1001288 main.go:141] libmachine: (calico-816069)     </interface>
	I0120 12:57:07.960640 1001288 main.go:141] libmachine: (calico-816069)     <serial type='pty'>
	I0120 12:57:07.960651 1001288 main.go:141] libmachine: (calico-816069)       <target port='0'/>
	I0120 12:57:07.960659 1001288 main.go:141] libmachine: (calico-816069)     </serial>
	I0120 12:57:07.960672 1001288 main.go:141] libmachine: (calico-816069)     <console type='pty'>
	I0120 12:57:07.960683 1001288 main.go:141] libmachine: (calico-816069)       <target type='serial' port='0'/>
	I0120 12:57:07.960693 1001288 main.go:141] libmachine: (calico-816069)     </console>
	I0120 12:57:07.960705 1001288 main.go:141] libmachine: (calico-816069)     <rng model='virtio'>
	I0120 12:57:07.960724 1001288 main.go:141] libmachine: (calico-816069)       <backend model='random'>/dev/random</backend>
	I0120 12:57:07.960736 1001288 main.go:141] libmachine: (calico-816069)     </rng>
	I0120 12:57:07.960749 1001288 main.go:141] libmachine: (calico-816069)     
	I0120 12:57:07.960784 1001288 main.go:141] libmachine: (calico-816069)     
	I0120 12:57:07.960808 1001288 main.go:141] libmachine: (calico-816069)   </devices>
	I0120 12:57:07.960818 1001288 main.go:141] libmachine: (calico-816069) </domain>
	I0120 12:57:07.960833 1001288 main.go:141] libmachine: (calico-816069) 
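
Aside: the XML dump above is the complete libvirt domain definition the kvm2 driver submits before booting the VM ("starting domain..." / "creating domain..." below). A minimal sketch of that define-and-start step, assuming the libvirt.org/go/libvirt bindings; this is an illustration, not the driver's actual code:

package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Connect to the same URI the logs show (KVMQemuURI:qemu:///system).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// domain.xml would hold the <domain>...</domain> document printed above.
	xml, err := os.ReadFile("domain.xml")
	if err != nil {
		log.Fatal(err)
	}

	// Define the persistent domain, then start it.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
	log.Println("domain started; next step is waiting for a DHCP lease / IP")
}
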
	I0120 12:57:07.965224 1001288 main.go:141] libmachine: (calico-816069) DBG | domain calico-816069 has defined MAC address 52:54:00:e1:17:50 in network default
	I0120 12:57:07.966046 1001288 main.go:141] libmachine: (calico-816069) starting domain...
	I0120 12:57:07.966069 1001288 main.go:141] libmachine: (calico-816069) ensuring networks are active...
	I0120 12:57:07.966081 1001288 main.go:141] libmachine: (calico-816069) DBG | domain calico-816069 has defined MAC address 52:54:00:18:02:dd in network mk-calico-816069
	I0120 12:57:07.966860 1001288 main.go:141] libmachine: (calico-816069) Ensuring network default is active
	I0120 12:57:07.967232 1001288 main.go:141] libmachine: (calico-816069) Ensuring network mk-calico-816069 is active
	I0120 12:57:07.967797 1001288 main.go:141] libmachine: (calico-816069) getting domain XML...
	I0120 12:57:07.968631 1001288 main.go:141] libmachine: (calico-816069) creating domain...
	I0120 12:57:08.849182 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetIP
	I0120 12:57:08.852382 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:08.852909 1001113 main.go:141] libmachine: (kindnet-816069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:57:d7", ip: ""} in network mk-kindnet-816069: {Iface:virbr2 ExpiryTime:2025-01-20 13:56:59 +0000 UTC Type:0 Mac:52:54:00:f5:57:d7 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:kindnet-816069 Clientid:01:52:54:00:f5:57:d7}
	I0120 12:57:08.852937 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined IP address 192.168.50.105 and MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:08.853221 1001113 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0120 12:57:08.857020 1001113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
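
The one-liner above makes the host.minikube.internal mapping idempotent: it filters any existing entry out of /etc/hosts, appends the current gateway IP, and copies the result back via a temp file. A rough local sketch of the same pattern (ensureHostsEntry is a hypothetical helper; minikube actually runs this command over SSH inside the guest):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// ensureHostsEntry rewrites /etc/hosts so that exactly one line maps hostname
// to ip, using the same grep/echo/cp pattern as the logged command above.
func ensureHostsEntry(ip, hostname string) error {
	cmd := fmt.Sprintf(
		"{ grep -v $'\\t%s$' /etc/hosts; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts",
		hostname, ip, hostname)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("hosts update failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := ensureHostsEntry("192.168.50.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}
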
	I0120 12:57:08.868515 1001113 kubeadm.go:883] updating cluster {Name:kindnet-816069 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:kindnet-816069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 12:57:08.868637 1001113 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 12:57:08.868700 1001113 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:57:08.904104 1001113 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0120 12:57:08.904177 1001113 ssh_runner.go:195] Run: which lz4
	I0120 12:57:08.908042 1001113 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 12:57:08.911983 1001113 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 12:57:08.912011 1001113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I0120 12:57:10.217225 1001113 crio.go:462] duration metric: took 1.309213838s to copy over tarball
	I0120 12:57:10.217319 1001113 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 12:57:12.518404 1001113 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.301046546s)
	I0120 12:57:12.518450 1001113 crio.go:469] duration metric: took 2.301194279s to extract the tarball
	I0120 12:57:12.518461 1001113 ssh_runner.go:146] rm: /preloaded.tar.lz4
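
The three steps above are the preload fast path: scp the cached lz4 image tarball into the guest, extract it under /var with xattrs preserved so the container images land directly in cri-o's storage, then remove the tarball. A sketch of the extraction step only (assumes tar and lz4 are available in the guest, as they are in the minikube ISO; illustration, not minikube's crio.go):

package main

import (
	"log"
	"os/exec"
)

// extractPreload unpacks a preloaded image tarball the same way as the
// "sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf ..."
// command in the log above.
func extractPreload(tarball string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Printf("tar output: %s", out)
	}
	return err
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		log.Fatal(err)
	}
}
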
	I0120 12:57:12.557570 1001113 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:57:12.599729 1001113 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 12:57:12.599758 1001113 cache_images.go:84] Images are preloaded, skipping loading
	I0120 12:57:12.599769 1001113 kubeadm.go:934] updating node { 192.168.50.105 8443 v1.32.0 crio true true} ...
	I0120 12:57:12.599897 1001113 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-816069 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:kindnet-816069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I0120 12:57:12.599987 1001113 ssh_runner.go:195] Run: crio config
	I0120 12:57:12.642263 1001113 cni.go:84] Creating CNI manager for "kindnet"
	I0120 12:57:12.642294 1001113 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 12:57:12.642320 1001113 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.105 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-816069 NodeName:kindnet-816069 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 12:57:12.642562 1001113 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-816069"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.105"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.105"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
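
The YAML above is rendered from the kubeadm options struct logged at kubeadm.go:189. A minimal sketch of how such a document can be produced from those values with text/template (hypothetical struct and template, not minikube's actual template assets):

package main

import (
	"log"
	"os"
	"text/template"
)

// kubeadmParams is a hypothetical subset of the options logged above.
type kubeadmParams struct {
	AdvertiseAddress  string
	APIServerPort     int
	NodeName          string
	PodSubnet         string
	ServiceCIDR       string
	KubernetesVersion string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	p := kubeadmParams{
		AdvertiseAddress:  "192.168.50.105",
		APIServerPort:     8443,
		NodeName:          "kindnet-816069",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
		KubernetesVersion: "v1.32.0",
	}
	t := template.Must(template.New("kubeadm").Parse(initTmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		log.Fatal(err)
	}
}
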
	I0120 12:57:12.642645 1001113 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 12:57:12.653298 1001113 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 12:57:12.653362 1001113 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 12:57:12.662515 1001113 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0120 12:57:12.682587 1001113 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 12:57:12.698405 1001113 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I0120 12:57:12.715057 1001113 ssh_runner.go:195] Run: grep 192.168.50.105	control-plane.minikube.internal$ /etc/hosts
	I0120 12:57:12.718661 1001113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 12:57:12.731010 1001113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:57:12.857522 1001113 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:57:12.874040 1001113 certs.go:68] Setting up /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kindnet-816069 for IP: 192.168.50.105
	I0120 12:57:12.874069 1001113 certs.go:194] generating shared ca certs ...
	I0120 12:57:12.874105 1001113 certs.go:226] acquiring lock for ca certs: {Name:mk7d6f61e0c358ebc451db1b73619131ab80bd63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:57:12.874339 1001113 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20151-942401/.minikube/ca.key
	I0120 12:57:12.874409 1001113 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.key
	I0120 12:57:12.874425 1001113 certs.go:256] generating profile certs ...
	I0120 12:57:12.874513 1001113 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kindnet-816069/client.key
	I0120 12:57:12.874572 1001113 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kindnet-816069/client.crt with IP's: []
	I0120 12:57:12.952153 1001113 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kindnet-816069/client.crt ...
	I0120 12:57:12.952189 1001113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kindnet-816069/client.crt: {Name:mk317d665cb29db3fea4bbe1a09441e4ecd5bdab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:57:12.952401 1001113 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kindnet-816069/client.key ...
	I0120 12:57:12.952421 1001113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kindnet-816069/client.key: {Name:mkb61131c955736c0d5f03ba9c7a816d1805a311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:57:12.952545 1001113 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kindnet-816069/apiserver.key.e5d262da
	I0120 12:57:12.952564 1001113 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kindnet-816069/apiserver.crt.e5d262da with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.105]
	I0120 12:57:13.151967 1001113 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kindnet-816069/apiserver.crt.e5d262da ...
	I0120 12:57:13.151998 1001113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kindnet-816069/apiserver.crt.e5d262da: {Name:mka99b76a56db56ec3f8df1754f4fdded72afab1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:57:13.152208 1001113 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kindnet-816069/apiserver.key.e5d262da ...
	I0120 12:57:13.152228 1001113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kindnet-816069/apiserver.key.e5d262da: {Name:mka54e9af763a8f5e6849c1497f64e5bb0bef8b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:57:13.152339 1001113 certs.go:381] copying /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kindnet-816069/apiserver.crt.e5d262da -> /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kindnet-816069/apiserver.crt
	I0120 12:57:13.152456 1001113 certs.go:385] copying /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kindnet-816069/apiserver.key.e5d262da -> /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kindnet-816069/apiserver.key
	I0120 12:57:13.152550 1001113 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kindnet-816069/proxy-client.key
	I0120 12:57:13.152579 1001113 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kindnet-816069/proxy-client.crt with IP's: []
	I0120 12:57:13.360777 1001113 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kindnet-816069/proxy-client.crt ...
	I0120 12:57:13.360809 1001113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kindnet-816069/proxy-client.crt: {Name:mk6569b3d4ee0424508ed19145cdac8983d49825 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:57:13.361008 1001113 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kindnet-816069/proxy-client.key ...
	I0120 12:57:13.361029 1001113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kindnet-816069/proxy-client.key: {Name:mk5d415c6d825fff8911fc7e0252400a8d1da623 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
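
Each profile cert generated above is a leaf certificate signed with the existing minikubeCA key. A condensed standard-library sketch of that signing flow (illustration only; the real crypto.go loads the CA from disk, sets more SANs and usages, and writes the files under the locks shown):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"os"
	"time"
)

func main() {
	// In-memory CA standing in for minikubeCA (the real one is loaded from .minikube).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Client ("minikube-user") certificate signed by the CA.
	clientKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	clientTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	clientDER, err := x509.CreateCertificate(rand.Reader, clientTmpl, caCert, &clientKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: clientDER})
}
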
	I0120 12:57:13.361276 1001113 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656.pem (1338 bytes)
	W0120 12:57:13.361325 1001113 certs.go:480] ignoring /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656_empty.pem, impossibly tiny 0 bytes
	I0120 12:57:13.361343 1001113 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem (1679 bytes)
	I0120 12:57:13.361500 1001113 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem (1078 bytes)
	I0120 12:57:13.361542 1001113 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem (1123 bytes)
	I0120 12:57:13.361579 1001113 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem (1675 bytes)
	I0120 12:57:13.361643 1001113 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem (1708 bytes)
	I0120 12:57:13.362279 1001113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 12:57:13.386840 1001113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0120 12:57:13.410334 1001113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 12:57:13.433076 1001113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 12:57:13.454793 1001113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kindnet-816069/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0120 12:57:13.475307 1001113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kindnet-816069/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0120 12:57:13.495918 1001113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kindnet-816069/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 12:57:13.518394 1001113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/kindnet-816069/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 12:57:13.539772 1001113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 12:57:13.562879 1001113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656.pem --> /usr/share/ca-certificates/949656.pem (1338 bytes)
	I0120 12:57:13.588988 1001113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem --> /usr/share/ca-certificates/9496562.pem (1708 bytes)
	I0120 12:57:13.613450 1001113 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 12:57:13.628348 1001113 ssh_runner.go:195] Run: openssl version
	I0120 12:57:13.633651 1001113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/949656.pem && ln -fs /usr/share/ca-certificates/949656.pem /etc/ssl/certs/949656.pem"
	I0120 12:57:13.643260 1001113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/949656.pem
	I0120 12:57:13.647197 1001113 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 11:30 /usr/share/ca-certificates/949656.pem
	I0120 12:57:13.647244 1001113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/949656.pem
	I0120 12:57:13.652422 1001113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/949656.pem /etc/ssl/certs/51391683.0"
	I0120 12:57:13.662088 1001113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9496562.pem && ln -fs /usr/share/ca-certificates/9496562.pem /etc/ssl/certs/9496562.pem"
	I0120 12:57:13.671913 1001113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9496562.pem
	I0120 12:57:13.675824 1001113 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 11:30 /usr/share/ca-certificates/9496562.pem
	I0120 12:57:13.675877 1001113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9496562.pem
	I0120 12:57:13.680908 1001113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9496562.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 12:57:13.691826 1001113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 12:57:13.702672 1001113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:57:13.706753 1001113 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 11:22 /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:57:13.706800 1001113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:57:13.712303 1001113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
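
The openssl/ln pairs above implement OpenSSL's CApath convention: each CA PEM gets a symlink named after its subject hash (for example b5213941.0 for minikubeCA.pem), so TLS clients can find it by directory lookup. A small sketch of the same two commands (illustration; in the log they run inside the VM via sudo):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// linkCertByHash recreates the "<subject-hash>.0" symlink used by OpenSSL's
// CApath lookup, matching the openssl/ln commands in the log above.
func linkCertByHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	return exec.Command("ln", "-fs", pemPath, link).Run()
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}
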
	I0120 12:57:13.724879 1001113 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 12:57:13.728881 1001113 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0120 12:57:13.728952 1001113 kubeadm.go:392] StartCluster: {Name:kindnet-816069 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:kindnet-816069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:57:13.729070 1001113 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 12:57:13.729116 1001113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 12:57:13.761487 1001113 cri.go:89] found id: ""
	I0120 12:57:13.761559 1001113 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 12:57:13.770399 1001113 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 12:57:13.779021 1001113 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:57:13.789290 1001113 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:57:13.789309 1001113 kubeadm.go:157] found existing configuration files:
	
	I0120 12:57:13.789353 1001113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 12:57:13.798220 1001113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:57:13.798279 1001113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:57:13.807703 1001113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 12:57:13.816609 1001113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:57:13.816667 1001113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:57:13.825995 1001113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 12:57:13.834494 1001113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:57:13.834567 1001113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:57:13.843269 1001113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 12:57:13.851586 1001113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:57:13.851647 1001113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 12:57:13.860533 1001113 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 12:57:13.915639 1001113 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 12:57:13.915783 1001113 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 12:57:14.030660 1001113 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 12:57:14.030816 1001113 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 12:57:14.030971 1001113 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 12:57:14.042857 1001113 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 12:57:09.300921 1001288 main.go:141] libmachine: (calico-816069) waiting for IP...
	I0120 12:57:09.302123 1001288 main.go:141] libmachine: (calico-816069) DBG | domain calico-816069 has defined MAC address 52:54:00:18:02:dd in network mk-calico-816069
	I0120 12:57:09.302783 1001288 main.go:141] libmachine: (calico-816069) DBG | unable to find current IP address of domain calico-816069 in network mk-calico-816069
	I0120 12:57:09.302899 1001288 main.go:141] libmachine: (calico-816069) DBG | I0120 12:57:09.302782 1002621 retry.go:31] will retry after 222.711075ms: waiting for domain to come up
	I0120 12:57:09.527481 1001288 main.go:141] libmachine: (calico-816069) DBG | domain calico-816069 has defined MAC address 52:54:00:18:02:dd in network mk-calico-816069
	I0120 12:57:09.528200 1001288 main.go:141] libmachine: (calico-816069) DBG | unable to find current IP address of domain calico-816069 in network mk-calico-816069
	I0120 12:57:09.528233 1001288 main.go:141] libmachine: (calico-816069) DBG | I0120 12:57:09.528163 1002621 retry.go:31] will retry after 286.227655ms: waiting for domain to come up
	I0120 12:57:09.815812 1001288 main.go:141] libmachine: (calico-816069) DBG | domain calico-816069 has defined MAC address 52:54:00:18:02:dd in network mk-calico-816069
	I0120 12:57:09.816508 1001288 main.go:141] libmachine: (calico-816069) DBG | unable to find current IP address of domain calico-816069 in network mk-calico-816069
	I0120 12:57:09.816615 1001288 main.go:141] libmachine: (calico-816069) DBG | I0120 12:57:09.816498 1002621 retry.go:31] will retry after 335.775141ms: waiting for domain to come up
	I0120 12:57:10.154353 1001288 main.go:141] libmachine: (calico-816069) DBG | domain calico-816069 has defined MAC address 52:54:00:18:02:dd in network mk-calico-816069
	I0120 12:57:10.154935 1001288 main.go:141] libmachine: (calico-816069) DBG | unable to find current IP address of domain calico-816069 in network mk-calico-816069
	I0120 12:57:10.154969 1001288 main.go:141] libmachine: (calico-816069) DBG | I0120 12:57:10.154911 1002621 retry.go:31] will retry after 457.701362ms: waiting for domain to come up
	I0120 12:57:10.614635 1001288 main.go:141] libmachine: (calico-816069) DBG | domain calico-816069 has defined MAC address 52:54:00:18:02:dd in network mk-calico-816069
	I0120 12:57:10.615316 1001288 main.go:141] libmachine: (calico-816069) DBG | unable to find current IP address of domain calico-816069 in network mk-calico-816069
	I0120 12:57:10.615364 1001288 main.go:141] libmachine: (calico-816069) DBG | I0120 12:57:10.615293 1002621 retry.go:31] will retry after 716.480511ms: waiting for domain to come up
	I0120 12:57:11.333244 1001288 main.go:141] libmachine: (calico-816069) DBG | domain calico-816069 has defined MAC address 52:54:00:18:02:dd in network mk-calico-816069
	I0120 12:57:11.333877 1001288 main.go:141] libmachine: (calico-816069) DBG | unable to find current IP address of domain calico-816069 in network mk-calico-816069
	I0120 12:57:11.333910 1001288 main.go:141] libmachine: (calico-816069) DBG | I0120 12:57:11.333835 1002621 retry.go:31] will retry after 768.813466ms: waiting for domain to come up
	I0120 12:57:12.104704 1001288 main.go:141] libmachine: (calico-816069) DBG | domain calico-816069 has defined MAC address 52:54:00:18:02:dd in network mk-calico-816069
	I0120 12:57:12.105204 1001288 main.go:141] libmachine: (calico-816069) DBG | unable to find current IP address of domain calico-816069 in network mk-calico-816069
	I0120 12:57:12.105240 1001288 main.go:141] libmachine: (calico-816069) DBG | I0120 12:57:12.105161 1002621 retry.go:31] will retry after 1.101642335s: waiting for domain to come up
	I0120 12:57:13.208896 1001288 main.go:141] libmachine: (calico-816069) DBG | domain calico-816069 has defined MAC address 52:54:00:18:02:dd in network mk-calico-816069
	I0120 12:57:13.209446 1001288 main.go:141] libmachine: (calico-816069) DBG | unable to find current IP address of domain calico-816069 in network mk-calico-816069
	I0120 12:57:13.209480 1001288 main.go:141] libmachine: (calico-816069) DBG | I0120 12:57:13.209416 1002621 retry.go:31] will retry after 1.470213701s: waiting for domain to come up
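
The repeated "will retry after ...: waiting for domain to come up" lines show the driver polling libvirt's DHCP leases for the new domain's MAC with a growing, jittered delay until an IP appears. A rough sketch of that wait loop (lookupLeaseIP is a hypothetical placeholder for the lease query):

package main

import (
	"errors"
	"fmt"
	"log"
	"math/rand"
	"time"
)

// lookupLeaseIP stands in for the libvirt DHCP-lease lookup; it returns an
// error until the guest has requested an address.
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries with a growing, jittered delay, like the retry.go
// "will retry after ..." lines above.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		log.Printf("will retry after %v: waiting for domain to come up", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2 // grow the interval, roughly like the log
	}
	return "", fmt.Errorf("timed out waiting for IP on %s", mac)
}

func main() {
	if _, err := waitForIP("52:54:00:18:02:dd", 30*time.Second); err != nil {
		log.Println(err)
	}
}
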
	I0120 12:57:14.176181 1001113 out.go:235]   - Generating certificates and keys ...
	I0120 12:57:14.176297 1001113 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 12:57:14.176427 1001113 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 12:57:14.287978 1001113 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0120 12:57:14.369050 1001113 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0120 12:57:14.544734 1001113 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0120 12:57:14.748793 1001113 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0120 12:57:14.890160 1001113 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0120 12:57:14.890569 1001113 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kindnet-816069 localhost] and IPs [192.168.50.105 127.0.0.1 ::1]
	I0120 12:57:15.175850 1001113 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0120 12:57:15.176150 1001113 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kindnet-816069 localhost] and IPs [192.168.50.105 127.0.0.1 ::1]
	I0120 12:57:15.266031 1001113 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0120 12:57:15.461666 1001113 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0120 12:57:15.517328 1001113 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0120 12:57:15.517599 1001113 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 12:57:15.632363 1001113 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 12:57:15.867203 1001113 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 12:57:16.015940 1001113 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 12:57:16.100660 1001113 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 12:57:16.207880 1001113 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 12:57:16.208482 1001113 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 12:57:16.210822 1001113 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 12:57:16.305040 1001113 out.go:235]   - Booting up control plane ...
	I0120 12:57:16.305190 1001113 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 12:57:16.305300 1001113 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 12:57:16.305402 1001113 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 12:57:16.305551 1001113 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 12:57:16.305673 1001113 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 12:57:16.305732 1001113 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 12:57:16.381154 1001113 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 12:57:16.381316 1001113 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 12:57:16.882365 1001113 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.635363ms
	I0120 12:57:16.882499 1001113 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 12:57:14.682004 1001288 main.go:141] libmachine: (calico-816069) DBG | domain calico-816069 has defined MAC address 52:54:00:18:02:dd in network mk-calico-816069
	I0120 12:57:14.682447 1001288 main.go:141] libmachine: (calico-816069) DBG | unable to find current IP address of domain calico-816069 in network mk-calico-816069
	I0120 12:57:14.682480 1001288 main.go:141] libmachine: (calico-816069) DBG | I0120 12:57:14.682432 1002621 retry.go:31] will retry after 1.622228298s: waiting for domain to come up
	I0120 12:57:16.306013 1001288 main.go:141] libmachine: (calico-816069) DBG | domain calico-816069 has defined MAC address 52:54:00:18:02:dd in network mk-calico-816069
	I0120 12:57:16.306708 1001288 main.go:141] libmachine: (calico-816069) DBG | unable to find current IP address of domain calico-816069 in network mk-calico-816069
	I0120 12:57:16.306796 1001288 main.go:141] libmachine: (calico-816069) DBG | I0120 12:57:16.306695 1002621 retry.go:31] will retry after 1.697336267s: waiting for domain to come up
	I0120 12:57:18.005444 1001288 main.go:141] libmachine: (calico-816069) DBG | domain calico-816069 has defined MAC address 52:54:00:18:02:dd in network mk-calico-816069
	I0120 12:57:18.006030 1001288 main.go:141] libmachine: (calico-816069) DBG | unable to find current IP address of domain calico-816069 in network mk-calico-816069
	I0120 12:57:18.006070 1001288 main.go:141] libmachine: (calico-816069) DBG | I0120 12:57:18.005987 1002621 retry.go:31] will retry after 2.494355452s: waiting for domain to come up
	I0120 12:57:21.881115 1001113 kubeadm.go:310] [api-check] The API server is healthy after 5.001857163s
	I0120 12:57:21.899993 1001113 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 12:57:21.916522 1001113 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 12:57:21.943880 1001113 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 12:57:21.944108 1001113 kubeadm.go:310] [mark-control-plane] Marking the node kindnet-816069 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 12:57:21.955610 1001113 kubeadm.go:310] [bootstrap-token] Using token: 7d7o4g.i5ko6o7f6c1zb192
	I0120 12:57:21.956981 1001113 out.go:235]   - Configuring RBAC rules ...
	I0120 12:57:21.957113 1001113 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 12:57:21.964198 1001113 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 12:57:21.974467 1001113 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 12:57:21.978965 1001113 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 12:57:21.987913 1001113 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 12:57:21.990827 1001113 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 12:57:22.292887 1001113 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 12:57:22.722421 1001113 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 12:57:23.288904 1001113 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 12:57:23.288947 1001113 kubeadm.go:310] 
	I0120 12:57:23.289032 1001113 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 12:57:23.289044 1001113 kubeadm.go:310] 
	I0120 12:57:23.289211 1001113 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 12:57:23.289237 1001113 kubeadm.go:310] 
	I0120 12:57:23.289274 1001113 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 12:57:23.289359 1001113 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 12:57:23.289429 1001113 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 12:57:23.289440 1001113 kubeadm.go:310] 
	I0120 12:57:23.289512 1001113 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 12:57:23.289528 1001113 kubeadm.go:310] 
	I0120 12:57:23.289618 1001113 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 12:57:23.289640 1001113 kubeadm.go:310] 
	I0120 12:57:23.289718 1001113 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 12:57:23.289818 1001113 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 12:57:23.289915 1001113 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 12:57:23.289929 1001113 kubeadm.go:310] 
	I0120 12:57:23.290031 1001113 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 12:57:23.290136 1001113 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 12:57:23.290147 1001113 kubeadm.go:310] 
	I0120 12:57:23.290268 1001113 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7d7o4g.i5ko6o7f6c1zb192 \
	I0120 12:57:23.290423 1001113 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9daca4b474befed26d429fab1ed19f40e58ea9925316ea59a2e801171f5c9665 \
	I0120 12:57:23.290507 1001113 kubeadm.go:310] 	--control-plane 
	I0120 12:57:23.290534 1001113 kubeadm.go:310] 
	I0120 12:57:23.290647 1001113 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 12:57:23.290660 1001113 kubeadm.go:310] 
	I0120 12:57:23.290789 1001113 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7d7o4g.i5ko6o7f6c1zb192 \
	I0120 12:57:23.290931 1001113 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9daca4b474befed26d429fab1ed19f40e58ea9925316ea59a2e801171f5c9665 
	I0120 12:57:23.292425 1001113 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 12:57:23.292477 1001113 cni.go:84] Creating CNI manager for "kindnet"
	I0120 12:57:23.294021 1001113 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0120 12:57:20.503471 1001288 main.go:141] libmachine: (calico-816069) DBG | domain calico-816069 has defined MAC address 52:54:00:18:02:dd in network mk-calico-816069
	I0120 12:57:20.503855 1001288 main.go:141] libmachine: (calico-816069) DBG | unable to find current IP address of domain calico-816069 in network mk-calico-816069
	I0120 12:57:20.503875 1001288 main.go:141] libmachine: (calico-816069) DBG | I0120 12:57:20.503828 1002621 retry.go:31] will retry after 2.335129083s: waiting for domain to come up
	I0120 12:57:22.840617 1001288 main.go:141] libmachine: (calico-816069) DBG | domain calico-816069 has defined MAC address 52:54:00:18:02:dd in network mk-calico-816069
	I0120 12:57:22.841075 1001288 main.go:141] libmachine: (calico-816069) DBG | unable to find current IP address of domain calico-816069 in network mk-calico-816069
	I0120 12:57:22.841106 1001288 main.go:141] libmachine: (calico-816069) DBG | I0120 12:57:22.841030 1002621 retry.go:31] will retry after 2.776559769s: waiting for domain to come up
	I0120 12:57:23.295168 1001113 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0120 12:57:23.300532 1001113 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.0/kubectl ...
	I0120 12:57:23.300549 1001113 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0120 12:57:23.316799 1001113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0120 12:57:23.595101 1001113 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 12:57:23.595163 1001113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:57:23.595215 1001113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-816069 minikube.k8s.io/updated_at=2025_01_20T12_57_23_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=77d80cf1517f5f1439721b28711982314b21bec9 minikube.k8s.io/name=kindnet-816069 minikube.k8s.io/primary=true
	I0120 12:57:23.780863 1001113 ops.go:34] apiserver oom_adj: -16
	I0120 12:57:23.781139 1001113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:57:24.281317 1001113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:57:24.781642 1001113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:57:25.281523 1001113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:57:25.781889 1001113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:57:26.281237 1001113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:57:26.781166 1001113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:57:27.281443 1001113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:57:27.378930 1001113 kubeadm.go:1113] duration metric: took 3.783828291s to wait for elevateKubeSystemPrivileges
	I0120 12:57:27.378976 1001113 kubeadm.go:394] duration metric: took 13.650030981s to StartCluster
	I0120 12:57:27.379004 1001113 settings.go:142] acquiring lock: {Name:mk1751cb86b61fcfcb0ff093c93fb653ca2002ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:57:27.379109 1001113 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:57:27.380111 1001113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/kubeconfig: {Name:mk7b2d17f701fc845d991764ab9cc32b8b0646e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:57:27.380359 1001113 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 12:57:27.380386 1001113 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0120 12:57:27.380477 1001113 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 12:57:27.380571 1001113 config.go:182] Loaded profile config "kindnet-816069": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:57:27.380588 1001113 addons.go:69] Setting storage-provisioner=true in profile "kindnet-816069"
	I0120 12:57:27.380605 1001113 addons.go:69] Setting default-storageclass=true in profile "kindnet-816069"
	I0120 12:57:27.380610 1001113 addons.go:238] Setting addon storage-provisioner=true in "kindnet-816069"
	I0120 12:57:27.380624 1001113 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-816069"
	I0120 12:57:27.380650 1001113 host.go:66] Checking if "kindnet-816069" exists ...
	I0120 12:57:27.381121 1001113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:57:27.381124 1001113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:57:27.381157 1001113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:57:27.381211 1001113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:57:27.381992 1001113 out.go:177] * Verifying Kubernetes components...
	I0120 12:57:27.383241 1001113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:57:27.396668 1001113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41101
	I0120 12:57:27.396740 1001113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33543
	I0120 12:57:27.397353 1001113 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:57:27.397362 1001113 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:57:27.397926 1001113 main.go:141] libmachine: Using API Version  1
	I0120 12:57:27.397948 1001113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:57:27.397967 1001113 main.go:141] libmachine: Using API Version  1
	I0120 12:57:27.397986 1001113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:57:27.398275 1001113 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:57:27.398397 1001113 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:57:27.398471 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetState
	I0120 12:57:27.399020 1001113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:57:27.399055 1001113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:57:27.402264 1001113 addons.go:238] Setting addon default-storageclass=true in "kindnet-816069"
	I0120 12:57:27.402320 1001113 host.go:66] Checking if "kindnet-816069" exists ...
	I0120 12:57:27.402764 1001113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:57:27.402804 1001113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:57:27.414062 1001113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41515
	I0120 12:57:27.414729 1001113 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:57:27.415392 1001113 main.go:141] libmachine: Using API Version  1
	I0120 12:57:27.415426 1001113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:57:27.415981 1001113 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:57:27.416232 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetState
	I0120 12:57:27.416949 1001113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42763
	I0120 12:57:27.417735 1001113 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:57:27.418325 1001113 main.go:141] libmachine: (kindnet-816069) Calling .DriverName
	I0120 12:57:27.418362 1001113 main.go:141] libmachine: Using API Version  1
	I0120 12:57:27.418375 1001113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:57:27.418743 1001113 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:57:27.419222 1001113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:57:27.419252 1001113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:57:27.421649 1001113 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:57:27.422956 1001113 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:57:27.422980 1001113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 12:57:27.423016 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHHostname
	I0120 12:57:27.426381 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:27.426883 1001113 main.go:141] libmachine: (kindnet-816069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:57:d7", ip: ""} in network mk-kindnet-816069: {Iface:virbr2 ExpiryTime:2025-01-20 13:56:59 +0000 UTC Type:0 Mac:52:54:00:f5:57:d7 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:kindnet-816069 Clientid:01:52:54:00:f5:57:d7}
	I0120 12:57:27.426916 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined IP address 192.168.50.105 and MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:27.427161 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHPort
	I0120 12:57:27.427322 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHKeyPath
	I0120 12:57:27.427476 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHUsername
	I0120 12:57:27.427636 1001113 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/kindnet-816069/id_rsa Username:docker}
	I0120 12:57:27.434996 1001113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45809
	I0120 12:57:27.435352 1001113 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:57:27.435758 1001113 main.go:141] libmachine: Using API Version  1
	I0120 12:57:27.435774 1001113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:57:27.436068 1001113 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:57:27.436265 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetState
	I0120 12:57:27.438062 1001113 main.go:141] libmachine: (kindnet-816069) Calling .DriverName
	I0120 12:57:27.438280 1001113 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 12:57:27.438296 1001113 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 12:57:27.438308 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHHostname
	I0120 12:57:27.441377 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:27.441896 1001113 main.go:141] libmachine: (kindnet-816069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:57:d7", ip: ""} in network mk-kindnet-816069: {Iface:virbr2 ExpiryTime:2025-01-20 13:56:59 +0000 UTC Type:0 Mac:52:54:00:f5:57:d7 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:kindnet-816069 Clientid:01:52:54:00:f5:57:d7}
	I0120 12:57:27.441926 1001113 main.go:141] libmachine: (kindnet-816069) DBG | domain kindnet-816069 has defined IP address 192.168.50.105 and MAC address 52:54:00:f5:57:d7 in network mk-kindnet-816069
	I0120 12:57:27.442151 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHPort
	I0120 12:57:27.442329 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHKeyPath
	I0120 12:57:27.442489 1001113 main.go:141] libmachine: (kindnet-816069) Calling .GetSSHUsername
	I0120 12:57:27.442649 1001113 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/kindnet-816069/id_rsa Username:docker}
	I0120 12:57:27.580014 1001113 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0120 12:57:27.629632 1001113 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:57:27.732232 1001113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 12:57:27.841042 1001113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:57:27.843056 1001113 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0120 12:57:27.844358 1001113 node_ready.go:35] waiting up to 15m0s for node "kindnet-816069" to be "Ready" ...
	I0120 12:57:27.927693 1001113 main.go:141] libmachine: Making call to close driver server
	I0120 12:57:27.927721 1001113 main.go:141] libmachine: (kindnet-816069) Calling .Close
	I0120 12:57:27.928024 1001113 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:57:27.928049 1001113 main.go:141] libmachine: (kindnet-816069) DBG | Closing plugin on server side
	I0120 12:57:27.928056 1001113 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:57:27.928092 1001113 main.go:141] libmachine: Making call to close driver server
	I0120 12:57:27.928120 1001113 main.go:141] libmachine: (kindnet-816069) Calling .Close
	I0120 12:57:27.928390 1001113 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:57:27.928395 1001113 main.go:141] libmachine: (kindnet-816069) DBG | Closing plugin on server side
	I0120 12:57:27.928405 1001113 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:57:27.936368 1001113 main.go:141] libmachine: Making call to close driver server
	I0120 12:57:27.936392 1001113 main.go:141] libmachine: (kindnet-816069) Calling .Close
	I0120 12:57:27.936660 1001113 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:57:27.936676 1001113 main.go:141] libmachine: (kindnet-816069) DBG | Closing plugin on server side
	I0120 12:57:27.936680 1001113 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:57:28.211374 1001113 main.go:141] libmachine: Making call to close driver server
	I0120 12:57:28.211400 1001113 main.go:141] libmachine: (kindnet-816069) Calling .Close
	I0120 12:57:28.211711 1001113 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:57:28.211732 1001113 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:57:28.211741 1001113 main.go:141] libmachine: Making call to close driver server
	I0120 12:57:28.211750 1001113 main.go:141] libmachine: (kindnet-816069) Calling .Close
	I0120 12:57:28.211755 1001113 main.go:141] libmachine: (kindnet-816069) DBG | Closing plugin on server side
	I0120 12:57:28.212030 1001113 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:57:28.212060 1001113 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:57:28.213471 1001113 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0120 12:57:25.619877 1001288 main.go:141] libmachine: (calico-816069) DBG | domain calico-816069 has defined MAC address 52:54:00:18:02:dd in network mk-calico-816069
	I0120 12:57:25.620265 1001288 main.go:141] libmachine: (calico-816069) DBG | unable to find current IP address of domain calico-816069 in network mk-calico-816069
	I0120 12:57:25.620304 1001288 main.go:141] libmachine: (calico-816069) DBG | I0120 12:57:25.620248 1002621 retry.go:31] will retry after 5.137239543s: waiting for domain to come up
	
	
	==> CRI-O <==
	Jan 20 12:57:31 default-k8s-diff-port-981597 crio[732]: time="2025-01-20 12:57:31.186882436Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377851186853429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e1c25fcc-5e17-43b5-a5fc-104936162dba name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:57:31 default-k8s-diff-port-981597 crio[732]: time="2025-01-20 12:57:31.187402806Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9f2abba2-ebf6-4c57-969c-75f0d6f13c92 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:57:31 default-k8s-diff-port-981597 crio[732]: time="2025-01-20 12:57:31.187453394Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9f2abba2-ebf6-4c57-969c-75f0d6f13c92 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:57:31 default-k8s-diff-port-981597 crio[732]: time="2025-01-20 12:57:31.187703266Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd591b12a967033ea3c50ab6b671e6aa184f860dd98aa6cd4fbeb0c5e1978f20,PodSandboxId:8deac22818485e9681cc0c45711c259b69058754f5fa4b6b8ff7ae1886799c4a,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737377817247876931,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-sxf9j,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: befe7122-5e28-4328-8eb2-5e45c6ba3035,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:646d6dda93cf3d6ff6586ec2febdae832246e68ad9b5df058fda94e232df2b64,PodSandboxId:593914b0b44a2891fcecd949a63fb465c9c7cc48fc78d0e185a8661c56dd6155,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737376553011119460,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-h6rh4,io.kubernetes.pod.namespace: kubernetes-dashboard
,io.kubernetes.pod.uid: 12da65fd-7ffd-485d-8fb0-b712f2ac02e7,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77d1df4c4102fc55714f53466875316e2a832e4ada1c2ce40ce05a51ffe1321b,PodSandboxId:0d4f467dcff0d31568778495d6eae7b7989496bd8d65fd6b55f82b3603e720af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737376541281411547,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod
.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e77b12e8-25f3-43ad-8588-2716dd4ccbd1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:179f1f311e59596840b12bcf1da606814e208c5277344fe4d753fa5825528dac,PodSandboxId:5fa2fef74a290a6b373799c16ae664e7edcb68f93d69b382236ce489028788ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737376540185573922,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-cn8tc,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19a18120-8f3f-45bd-92f3-c291423f4895,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:914aea69f900968af04fec60015d1482ac8709f4341636c20b769944ab0db546,PodSandboxId:730593d372498698d6b424987a63e888bff49ba49bd8d18b4aff01e26ca72364,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f
91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737376540090627670,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-g9m4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e3e4568-92ab-4ee5-b10a-5489b72248d6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b22c408d59e9871bdc35d63bf6e04a38ceb5414b9141e1a5733c8b997fd04cd,PodSandboxId:27871a969ebd306bcb8cf1b90dffa74be4b738ba8d720a6cc845676956415c58,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737376539124840721,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sn66t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90855a0-c87a-4b55-bd0e-4b95b062479d,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e059a86f7798d824fb89b95a45d36f67451652d5f9c67388d7e42e7e62a7dea,PodSandboxId:ef87cdd1629e0aa99f97f95bec9ff649b4d00be7d0fa8e9507615c4efaadae59,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d
2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737376528600112031,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-981597,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4c41b2c07c144fc03d2ba99624e1926,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5270995abf8a0993d07ee060c0850faab1f12c6e5a2e302dc9b08b941ac3952,PodSandboxId:b946c2d4926f03ec5676d8568bd7879313daab9fa18b0b3fdaa19ab06a96bc08,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageS
pec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737376528557533792,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-981597,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bc01c6ceec4d4e750a535432975e989,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:935ac6b8c04d03c7b5ced95c1b3d80e43181276db09996c4c6ca25decdee96a9,PodSandboxId:efa575529a3a1255abfdb0f5f58468a77a77ed2e065def765b08a5b5f759fbd9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9
e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737376528530355420,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-981597,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d0b34dddede0de56cdc216d3e67be25,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1e549026bf5b23dbbf7895f7bb536f27b84aed7911dd75882ecb3cfd42363f,PodSandboxId:c67111c41ccfd41394331da195cc8bf9423cb0d5a25b5474af74f40bf3358c1b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd
4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737376528507609320,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-981597,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a1002b6dbe0c39c0dbf38a9e405affc,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:200a63a146ff4b1e33ada35e1b8ac583650e35f0b110756509c8165ea156265e,PodSandboxId:c2dc6a89a22f078e66b0761d20e847c44c7ada4dd4e3b475cacf2a303739f0c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4e
b408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1737376238369526040,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-981597,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a1002b6dbe0c39c0dbf38a9e405affc,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9f2abba2-ebf6-4c57-969c-75f0d6f13c92 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:57:31 default-k8s-diff-port-981597 crio[732]: time="2025-01-20 12:57:31.222970400Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ecc4d2ea-0722-40df-9126-e4a6899d36c1 name=/runtime.v1.RuntimeService/Version
	Jan 20 12:57:31 default-k8s-diff-port-981597 crio[732]: time="2025-01-20 12:57:31.223030411Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ecc4d2ea-0722-40df-9126-e4a6899d36c1 name=/runtime.v1.RuntimeService/Version
	Jan 20 12:57:31 default-k8s-diff-port-981597 crio[732]: time="2025-01-20 12:57:31.224362145Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a2a034e5-82c9-4d31-9bcf-15b4c7448429 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:57:31 default-k8s-diff-port-981597 crio[732]: time="2025-01-20 12:57:31.224782816Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377851224763916,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a2a034e5-82c9-4d31-9bcf-15b4c7448429 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:57:31 default-k8s-diff-port-981597 crio[732]: time="2025-01-20 12:57:31.225296387Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9dd66e31-c782-4499-9f39-c11beebaa701 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:57:31 default-k8s-diff-port-981597 crio[732]: time="2025-01-20 12:57:31.225343573Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9dd66e31-c782-4499-9f39-c11beebaa701 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:57:31 default-k8s-diff-port-981597 crio[732]: time="2025-01-20 12:57:31.225568361Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd591b12a967033ea3c50ab6b671e6aa184f860dd98aa6cd4fbeb0c5e1978f20,PodSandboxId:8deac22818485e9681cc0c45711c259b69058754f5fa4b6b8ff7ae1886799c4a,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737377817247876931,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-sxf9j,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: befe7122-5e28-4328-8eb2-5e45c6ba3035,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:646d6dda93cf3d6ff6586ec2febdae832246e68ad9b5df058fda94e232df2b64,PodSandboxId:593914b0b44a2891fcecd949a63fb465c9c7cc48fc78d0e185a8661c56dd6155,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737376553011119460,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-h6rh4,io.kubernetes.pod.namespace: kubernetes-dashboard
,io.kubernetes.pod.uid: 12da65fd-7ffd-485d-8fb0-b712f2ac02e7,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77d1df4c4102fc55714f53466875316e2a832e4ada1c2ce40ce05a51ffe1321b,PodSandboxId:0d4f467dcff0d31568778495d6eae7b7989496bd8d65fd6b55f82b3603e720af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737376541281411547,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod
.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e77b12e8-25f3-43ad-8588-2716dd4ccbd1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:179f1f311e59596840b12bcf1da606814e208c5277344fe4d753fa5825528dac,PodSandboxId:5fa2fef74a290a6b373799c16ae664e7edcb68f93d69b382236ce489028788ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737376540185573922,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-cn8tc,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19a18120-8f3f-45bd-92f3-c291423f4895,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:914aea69f900968af04fec60015d1482ac8709f4341636c20b769944ab0db546,PodSandboxId:730593d372498698d6b424987a63e888bff49ba49bd8d18b4aff01e26ca72364,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f
91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737376540090627670,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-g9m4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e3e4568-92ab-4ee5-b10a-5489b72248d6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b22c408d59e9871bdc35d63bf6e04a38ceb5414b9141e1a5733c8b997fd04cd,PodSandboxId:27871a969ebd306bcb8cf1b90dffa74be4b738ba8d720a6cc845676956415c58,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737376539124840721,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sn66t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90855a0-c87a-4b55-bd0e-4b95b062479d,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e059a86f7798d824fb89b95a45d36f67451652d5f9c67388d7e42e7e62a7dea,PodSandboxId:ef87cdd1629e0aa99f97f95bec9ff649b4d00be7d0fa8e9507615c4efaadae59,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d
2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737376528600112031,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-981597,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4c41b2c07c144fc03d2ba99624e1926,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5270995abf8a0993d07ee060c0850faab1f12c6e5a2e302dc9b08b941ac3952,PodSandboxId:b946c2d4926f03ec5676d8568bd7879313daab9fa18b0b3fdaa19ab06a96bc08,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageS
pec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737376528557533792,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-981597,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bc01c6ceec4d4e750a535432975e989,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:935ac6b8c04d03c7b5ced95c1b3d80e43181276db09996c4c6ca25decdee96a9,PodSandboxId:efa575529a3a1255abfdb0f5f58468a77a77ed2e065def765b08a5b5f759fbd9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9
e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737376528530355420,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-981597,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d0b34dddede0de56cdc216d3e67be25,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1e549026bf5b23dbbf7895f7bb536f27b84aed7911dd75882ecb3cfd42363f,PodSandboxId:c67111c41ccfd41394331da195cc8bf9423cb0d5a25b5474af74f40bf3358c1b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd
4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737376528507609320,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-981597,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a1002b6dbe0c39c0dbf38a9e405affc,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:200a63a146ff4b1e33ada35e1b8ac583650e35f0b110756509c8165ea156265e,PodSandboxId:c2dc6a89a22f078e66b0761d20e847c44c7ada4dd4e3b475cacf2a303739f0c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4e
b408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1737376238369526040,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-981597,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a1002b6dbe0c39c0dbf38a9e405affc,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9dd66e31-c782-4499-9f39-c11beebaa701 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:57:31 default-k8s-diff-port-981597 crio[732]: time="2025-01-20 12:57:31.254819800Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=875aa528-1ec7-4a9e-bb32-fad62cbfb10f name=/runtime.v1.RuntimeService/Version
	Jan 20 12:57:31 default-k8s-diff-port-981597 crio[732]: time="2025-01-20 12:57:31.254907325Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=875aa528-1ec7-4a9e-bb32-fad62cbfb10f name=/runtime.v1.RuntimeService/Version
	Jan 20 12:57:31 default-k8s-diff-port-981597 crio[732]: time="2025-01-20 12:57:31.256187679Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2e8058e7-779d-4909-9aef-d2f6a3791ba6 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:57:31 default-k8s-diff-port-981597 crio[732]: time="2025-01-20 12:57:31.256749331Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377851256654855,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2e8058e7-779d-4909-9aef-d2f6a3791ba6 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:57:31 default-k8s-diff-port-981597 crio[732]: time="2025-01-20 12:57:31.259384721Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9678b93a-dea1-4c1c-b0a1-4c55e81e4a3e name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:57:31 default-k8s-diff-port-981597 crio[732]: time="2025-01-20 12:57:31.259464650Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9678b93a-dea1-4c1c-b0a1-4c55e81e4a3e name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:57:31 default-k8s-diff-port-981597 crio[732]: time="2025-01-20 12:57:31.259793696Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd591b12a967033ea3c50ab6b671e6aa184f860dd98aa6cd4fbeb0c5e1978f20,PodSandboxId:8deac22818485e9681cc0c45711c259b69058754f5fa4b6b8ff7ae1886799c4a,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737377817247876931,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-sxf9j,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: befe7122-5e28-4328-8eb2-5e45c6ba3035,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:646d6dda93cf3d6ff6586ec2febdae832246e68ad9b5df058fda94e232df2b64,PodSandboxId:593914b0b44a2891fcecd949a63fb465c9c7cc48fc78d0e185a8661c56dd6155,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737376553011119460,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-h6rh4,io.kubernetes.pod.namespace: kubernetes-dashboard
,io.kubernetes.pod.uid: 12da65fd-7ffd-485d-8fb0-b712f2ac02e7,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77d1df4c4102fc55714f53466875316e2a832e4ada1c2ce40ce05a51ffe1321b,PodSandboxId:0d4f467dcff0d31568778495d6eae7b7989496bd8d65fd6b55f82b3603e720af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737376541281411547,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod
.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e77b12e8-25f3-43ad-8588-2716dd4ccbd1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:179f1f311e59596840b12bcf1da606814e208c5277344fe4d753fa5825528dac,PodSandboxId:5fa2fef74a290a6b373799c16ae664e7edcb68f93d69b382236ce489028788ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737376540185573922,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-cn8tc,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19a18120-8f3f-45bd-92f3-c291423f4895,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:914aea69f900968af04fec60015d1482ac8709f4341636c20b769944ab0db546,PodSandboxId:730593d372498698d6b424987a63e888bff49ba49bd8d18b4aff01e26ca72364,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f
91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737376540090627670,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-g9m4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e3e4568-92ab-4ee5-b10a-5489b72248d6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b22c408d59e9871bdc35d63bf6e04a38ceb5414b9141e1a5733c8b997fd04cd,PodSandboxId:27871a969ebd306bcb8cf1b90dffa74be4b738ba8d720a6cc845676956415c58,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737376539124840721,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sn66t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90855a0-c87a-4b55-bd0e-4b95b062479d,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e059a86f7798d824fb89b95a45d36f67451652d5f9c67388d7e42e7e62a7dea,PodSandboxId:ef87cdd1629e0aa99f97f95bec9ff649b4d00be7d0fa8e9507615c4efaadae59,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d
2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737376528600112031,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-981597,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4c41b2c07c144fc03d2ba99624e1926,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5270995abf8a0993d07ee060c0850faab1f12c6e5a2e302dc9b08b941ac3952,PodSandboxId:b946c2d4926f03ec5676d8568bd7879313daab9fa18b0b3fdaa19ab06a96bc08,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageS
pec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737376528557533792,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-981597,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bc01c6ceec4d4e750a535432975e989,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:935ac6b8c04d03c7b5ced95c1b3d80e43181276db09996c4c6ca25decdee96a9,PodSandboxId:efa575529a3a1255abfdb0f5f58468a77a77ed2e065def765b08a5b5f759fbd9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9
e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737376528530355420,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-981597,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d0b34dddede0de56cdc216d3e67be25,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1e549026bf5b23dbbf7895f7bb536f27b84aed7911dd75882ecb3cfd42363f,PodSandboxId:c67111c41ccfd41394331da195cc8bf9423cb0d5a25b5474af74f40bf3358c1b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd
4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737376528507609320,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-981597,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a1002b6dbe0c39c0dbf38a9e405affc,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:200a63a146ff4b1e33ada35e1b8ac583650e35f0b110756509c8165ea156265e,PodSandboxId:c2dc6a89a22f078e66b0761d20e847c44c7ada4dd4e3b475cacf2a303739f0c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4e
b408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1737376238369526040,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-981597,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a1002b6dbe0c39c0dbf38a9e405affc,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9678b93a-dea1-4c1c-b0a1-4c55e81e4a3e name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:57:31 default-k8s-diff-port-981597 crio[732]: time="2025-01-20 12:57:31.294992287Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=65ddf037-7d3c-4a1f-9127-42b85cb5b320 name=/runtime.v1.RuntimeService/Version
	Jan 20 12:57:31 default-k8s-diff-port-981597 crio[732]: time="2025-01-20 12:57:31.295077200Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=65ddf037-7d3c-4a1f-9127-42b85cb5b320 name=/runtime.v1.RuntimeService/Version
	Jan 20 12:57:31 default-k8s-diff-port-981597 crio[732]: time="2025-01-20 12:57:31.296042735Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e85d16eb-9407-4b2b-940d-ad70dc4b81bf name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:57:31 default-k8s-diff-port-981597 crio[732]: time="2025-01-20 12:57:31.296500274Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377851296483051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e85d16eb-9407-4b2b-940d-ad70dc4b81bf name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:57:31 default-k8s-diff-port-981597 crio[732]: time="2025-01-20 12:57:31.296886659Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aa91a9ad-1da9-48ae-b886-a7c74e816665 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:57:31 default-k8s-diff-port-981597 crio[732]: time="2025-01-20 12:57:31.296948445Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aa91a9ad-1da9-48ae-b886-a7c74e816665 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:57:31 default-k8s-diff-port-981597 crio[732]: time="2025-01-20 12:57:31.297164634Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd591b12a967033ea3c50ab6b671e6aa184f860dd98aa6cd4fbeb0c5e1978f20,PodSandboxId:8deac22818485e9681cc0c45711c259b69058754f5fa4b6b8ff7ae1886799c4a,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737377817247876931,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-sxf9j,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: befe7122-5e28-4328-8eb2-5e45c6ba3035,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:646d6dda93cf3d6ff6586ec2febdae832246e68ad9b5df058fda94e232df2b64,PodSandboxId:593914b0b44a2891fcecd949a63fb465c9c7cc48fc78d0e185a8661c56dd6155,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737376553011119460,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-h6rh4,io.kubernetes.pod.namespace: kubernetes-dashboard
,io.kubernetes.pod.uid: 12da65fd-7ffd-485d-8fb0-b712f2ac02e7,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77d1df4c4102fc55714f53466875316e2a832e4ada1c2ce40ce05a51ffe1321b,PodSandboxId:0d4f467dcff0d31568778495d6eae7b7989496bd8d65fd6b55f82b3603e720af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737376541281411547,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod
.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e77b12e8-25f3-43ad-8588-2716dd4ccbd1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:179f1f311e59596840b12bcf1da606814e208c5277344fe4d753fa5825528dac,PodSandboxId:5fa2fef74a290a6b373799c16ae664e7edcb68f93d69b382236ce489028788ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737376540185573922,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-cn8tc,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19a18120-8f3f-45bd-92f3-c291423f4895,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:914aea69f900968af04fec60015d1482ac8709f4341636c20b769944ab0db546,PodSandboxId:730593d372498698d6b424987a63e888bff49ba49bd8d18b4aff01e26ca72364,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f
91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737376540090627670,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-g9m4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e3e4568-92ab-4ee5-b10a-5489b72248d6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b22c408d59e9871bdc35d63bf6e04a38ceb5414b9141e1a5733c8b997fd04cd,PodSandboxId:27871a969ebd306bcb8cf1b90dffa74be4b738ba8d720a6cc845676956415c58,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737376539124840721,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sn66t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90855a0-c87a-4b55-bd0e-4b95b062479d,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e059a86f7798d824fb89b95a45d36f67451652d5f9c67388d7e42e7e62a7dea,PodSandboxId:ef87cdd1629e0aa99f97f95bec9ff649b4d00be7d0fa8e9507615c4efaadae59,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d
2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737376528600112031,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-981597,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4c41b2c07c144fc03d2ba99624e1926,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5270995abf8a0993d07ee060c0850faab1f12c6e5a2e302dc9b08b941ac3952,PodSandboxId:b946c2d4926f03ec5676d8568bd7879313daab9fa18b0b3fdaa19ab06a96bc08,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageS
pec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737376528557533792,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-981597,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bc01c6ceec4d4e750a535432975e989,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:935ac6b8c04d03c7b5ced95c1b3d80e43181276db09996c4c6ca25decdee96a9,PodSandboxId:efa575529a3a1255abfdb0f5f58468a77a77ed2e065def765b08a5b5f759fbd9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9
e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737376528530355420,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-981597,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d0b34dddede0de56cdc216d3e67be25,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1e549026bf5b23dbbf7895f7bb536f27b84aed7911dd75882ecb3cfd42363f,PodSandboxId:c67111c41ccfd41394331da195cc8bf9423cb0d5a25b5474af74f40bf3358c1b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd
4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737376528507609320,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-981597,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a1002b6dbe0c39c0dbf38a9e405affc,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:200a63a146ff4b1e33ada35e1b8ac583650e35f0b110756509c8165ea156265e,PodSandboxId:c2dc6a89a22f078e66b0761d20e847c44c7ada4dd4e3b475cacf2a303739f0c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4e
b408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1737376238369526040,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-981597,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a1002b6dbe0c39c0dbf38a9e405affc,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aa91a9ad-1da9-48ae-b886-a7c74e816665 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	bd591b12a9670       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           34 seconds ago      Exited              dashboard-metrics-scraper   9                   8deac22818485       dashboard-metrics-scraper-86c6bf9756-sxf9j
	646d6dda93cf3       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   21 minutes ago      Running             kubernetes-dashboard        0                   593914b0b44a2       kubernetes-dashboard-7779f9b69b-h6rh4
	77d1df4c4102f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 minutes ago      Running             storage-provisioner         0                   0d4f467dcff0d       storage-provisioner
	179f1f311e595       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           21 minutes ago      Running             coredns                     0                   5fa2fef74a290       coredns-668d6bf9bc-cn8tc
	914aea69f9009       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           21 minutes ago      Running             coredns                     0                   730593d372498       coredns-668d6bf9bc-g9m4p
	8b22c408d59e9       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08                                           21 minutes ago      Running             kube-proxy                  0                   27871a969ebd3       kube-proxy-sn66t
	3e059a86f7798       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3                                           22 minutes ago      Running             kube-controller-manager     2                   ef87cdd1629e0       kube-controller-manager-default-k8s-diff-port-981597
	b5270995abf8a       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5                                           22 minutes ago      Running             kube-scheduler              2                   b946c2d4926f0       kube-scheduler-default-k8s-diff-port-981597
	935ac6b8c04d0       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                           22 minutes ago      Running             etcd                        2                   efa575529a3a1       etcd-default-k8s-diff-port-981597
	7f1e549026bf5       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4                                           22 minutes ago      Running             kube-apiserver              2                   c67111c41ccfd       kube-apiserver-default-k8s-diff-port-981597
	200a63a146ff4       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4                                           26 minutes ago      Exited              kube-apiserver              1                   c2dc6a89a22f0       kube-apiserver-default-k8s-diff-port-981597
	
	
	==> coredns [179f1f311e59596840b12bcf1da606814e208c5277344fe4d753fa5825528dac] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [914aea69f900968af04fec60015d1482ac8709f4341636c20b769944ab0db546] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-981597
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-981597
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77d80cf1517f5f1439721b28711982314b21bec9
	                    minikube.k8s.io/name=default-k8s-diff-port-981597
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_20T12_35_34_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Jan 2025 12:35:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-981597
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Jan 2025 12:57:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Jan 2025 12:52:54 +0000   Mon, 20 Jan 2025 12:35:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Jan 2025 12:52:54 +0000   Mon, 20 Jan 2025 12:35:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Jan 2025 12:52:54 +0000   Mon, 20 Jan 2025 12:35:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Jan 2025 12:52:54 +0000   Mon, 20 Jan 2025 12:35:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.222
	  Hostname:    default-k8s-diff-port-981597
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bd2ea5e54d5d4acaa41657be60c35849
	  System UUID:                bd2ea5e5-4d5d-4aca-a416-57be60c35849
	  Boot ID:                    9ff6c5c0-2b27-44a4-8151-568ff0591e22
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-cn8tc                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 coredns-668d6bf9bc-g9m4p                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-981597                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-981597             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-981597    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-sn66t                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-981597             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-f79f97bbb-xkrxx                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-sxf9j              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-h6rh4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-981597 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-981597 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node default-k8s-diff-port-981597 status is now: NodeHasSufficientPID
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-981597 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-981597 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-981597 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-981597 event: Registered Node default-k8s-diff-port-981597 in Controller
	
	
	==> dmesg <==
	[  +0.039478] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.976167] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.103430] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.596722] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.807270] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.059528] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052256] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.165206] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +0.124984] systemd-fstab-generator[693]: Ignoring "noauto" option for root device
	[  +0.252254] systemd-fstab-generator[722]: Ignoring "noauto" option for root device
	[  +4.007912] systemd-fstab-generator[815]: Ignoring "noauto" option for root device
	[  +2.152590] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +0.059676] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.492730] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.826490] kauditd_printk_skb: 90 callbacks suppressed
	[Jan20 12:35] kauditd_printk_skb: 2 callbacks suppressed
	[  +2.400627] systemd-fstab-generator[2713]: Ignoring "noauto" option for root device
	[  +4.581057] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.970935] systemd-fstab-generator[3054]: Ignoring "noauto" option for root device
	[  +5.106112] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.369570] systemd-fstab-generator[3245]: Ignoring "noauto" option for root device
	[  +7.168617] kauditd_printk_skb: 112 callbacks suppressed
	[  +6.472677] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [935ac6b8c04d03c7b5ced95c1b3d80e43181276db09996c4c6ca25decdee96a9] <==
	{"level":"info","ts":"2025-01-20T12:56:28.812128Z","caller":"traceutil/trace.go:171","msg":"trace[1385008906] transaction","detail":"{read_only:false; response_revision:1677; number_of_response:1; }","duration":"530.26897ms","start":"2025-01-20T12:56:28.281823Z","end":"2025-01-20T12:56:28.812092Z","steps":["trace[1385008906] 'process raft request'  (duration: 530.12092ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T12:56:28.812566Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-20T12:56:28.281809Z","time spent":"530.58258ms","remote":"127.0.0.1:53054","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1676 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-01-20T12:56:29.337631Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"413.287345ms","expected-duration":"100ms","prefix":"","request":"header:<ID:694280585983147730 > lease_revoke:<id:09a29483b5f9be39>","response":"size:28"}
	{"level":"info","ts":"2025-01-20T12:56:29.338028Z","caller":"traceutil/trace.go:171","msg":"trace[545116101] linearizableReadLoop","detail":"{readStateIndex:1947; appliedIndex:1946; }","duration":"1.017846223s","start":"2025-01-20T12:56:28.320137Z","end":"2025-01-20T12:56:29.337984Z","steps":["trace[545116101] 'read index received'  (duration: 493.011938ms)","trace[545116101] 'applied index is now lower than readState.Index'  (duration: 524.832978ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-20T12:56:29.338158Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.018002432s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T12:56:29.338654Z","caller":"traceutil/trace.go:171","msg":"trace[47762822] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1677; }","duration":"1.018508889s","start":"2025-01-20T12:56:28.320133Z","end":"2025-01-20T12:56:29.338641Z","steps":["trace[47762822] 'agreement among raft nodes before linearized reading'  (duration: 1.017983468s)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T12:56:29.339112Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"929.929961ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T12:56:29.339294Z","caller":"traceutil/trace.go:171","msg":"trace[906645785] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1678; }","duration":"930.137776ms","start":"2025-01-20T12:56:28.409148Z","end":"2025-01-20T12:56:29.339286Z","steps":["trace[906645785] 'agreement among raft nodes before linearized reading'  (duration: 929.933406ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T12:56:29.339413Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-20T12:56:28.409098Z","time spent":"930.3041ms","remote":"127.0.0.1:53078","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-01-20T12:56:29.338362Z","caller":"traceutil/trace.go:171","msg":"trace[1449875709] transaction","detail":"{read_only:false; response_revision:1678; number_of_response:1; }","duration":"146.283666ms","start":"2025-01-20T12:56:29.192070Z","end":"2025-01-20T12:56:29.338353Z","steps":["trace[1449875709] 'process raft request'  (duration: 146.193251ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T12:56:29.339631Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.456622ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T12:56:29.339670Z","caller":"traceutil/trace.go:171","msg":"trace[1930435806] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1678; }","duration":"176.533775ms","start":"2025-01-20T12:56:29.163131Z","end":"2025-01-20T12:56:29.339664Z","steps":["trace[1930435806] 'agreement among raft nodes before linearized reading'  (duration: 176.483292ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T12:56:29.339797Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"821.377579ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T12:56:29.339840Z","caller":"traceutil/trace.go:171","msg":"trace[21232887] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1678; }","duration":"821.446823ms","start":"2025-01-20T12:56:28.518382Z","end":"2025-01-20T12:56:29.339828Z","steps":["trace[21232887] 'agreement among raft nodes before linearized reading'  (duration: 821.392543ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T12:56:29.339886Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-20T12:56:28.518364Z","time spent":"821.517216ms","remote":"127.0.0.1:52882","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2025-01-20T12:56:29.743269Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.136916ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2025-01-20T12:56:29.743331Z","caller":"traceutil/trace.go:171","msg":"trace[1411402000] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; response_count:0; response_revision:1678; }","duration":"169.212211ms","start":"2025-01-20T12:56:29.574102Z","end":"2025-01-20T12:56:29.743315Z","steps":["trace[1411402000] 'count revisions from in-memory index tree'  (duration: 169.000864ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T12:56:29.743392Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.422753ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T12:56:29.743443Z","caller":"traceutil/trace.go:171","msg":"trace[1264314061] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1678; }","duration":"134.596797ms","start":"2025-01-20T12:56:29.608835Z","end":"2025-01-20T12:56:29.743432Z","steps":["trace[1264314061] 'range keys from in-memory index tree'  (duration: 134.356202ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T12:57:15.291387Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.880059ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T12:57:15.291658Z","caller":"traceutil/trace.go:171","msg":"trace[1704718465] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1719; }","duration":"153.185428ms","start":"2025-01-20T12:57:15.138455Z","end":"2025-01-20T12:57:15.291641Z","steps":["trace[1704718465] 'range keys from in-memory index tree'  (duration: 152.867721ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T12:57:15.291812Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.712723ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T12:57:15.291877Z","caller":"traceutil/trace.go:171","msg":"trace[704518391] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1719; }","duration":"129.80218ms","start":"2025-01-20T12:57:15.162061Z","end":"2025-01-20T12:57:15.291863Z","steps":["trace[704518391] 'range keys from in-memory index tree'  (duration: 129.642035ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T12:57:16.120112Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.910208ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T12:57:16.120190Z","caller":"traceutil/trace.go:171","msg":"trace[945225824] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1720; }","duration":"116.027955ms","start":"2025-01-20T12:57:16.004149Z","end":"2025-01-20T12:57:16.120177Z","steps":["trace[945225824] 'range keys from in-memory index tree'  (duration: 115.779839ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:57:31 up 27 min,  0 users,  load average: 0.18, 0.19, 0.18
	Linux default-k8s-diff-port-981597 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [200a63a146ff4b1e33ada35e1b8ac583650e35f0b110756509c8165ea156265e] <==
	W0120 12:35:19.248800       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:35:19.353573       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:35:22.865985       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:35:22.967388       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:35:22.994668       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:35:23.190541       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:35:23.590808       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:35:23.635892       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:35:23.648043       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:35:23.650535       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:35:23.796952       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:35:23.864616       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:35:23.881631       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:35:23.892174       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:35:23.893521       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:35:23.897986       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:35:23.969059       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:35:23.976543       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:35:23.996690       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:35:24.041539       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:35:24.065438       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:35:24.095004       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:35:24.170588       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:35:24.224935       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 12:35:24.270737       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [7f1e549026bf5b23dbbf7895f7bb536f27b84aed7911dd75882ecb3cfd42363f] <==
	I0120 12:53:32.400717       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0120 12:53:32.400773       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0120 12:55:31.398326       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 12:55:31.398738       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0120 12:55:32.400469       1 handler_proxy.go:99] no RequestInfo found in the context
	W0120 12:55:32.400497       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 12:55:32.400732       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0120 12:55:32.400746       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0120 12:55:32.401940       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0120 12:55:32.401993       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0120 12:56:32.402775       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 12:56:32.402867       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0120 12:56:32.403063       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 12:56:32.403297       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0120 12:56:32.404042       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0120 12:56:32.405182       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [3e059a86f7798d824fb89b95a45d36f67451652d5f9c67388d7e42e7e62a7dea] <==
	E0120 12:52:38.250696       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:52:38.282638       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0120 12:52:54.529192       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-981597"
	E0120 12:53:08.258273       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:53:08.291994       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 12:53:38.264360       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:53:38.298584       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 12:54:08.270532       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:54:08.304777       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 12:54:38.278201       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:54:38.313454       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 12:55:08.285462       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:55:08.321338       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 12:55:38.293944       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:55:38.333287       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 12:56:08.300499       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:56:08.340349       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 12:56:38.309148       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:56:38.349669       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0120 12:56:48.252626       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="316.698µs"
	I0120 12:56:58.239713       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="142.842µs"
	I0120 12:57:01.243513       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="153.411µs"
	I0120 12:57:05.109068       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="80.47µs"
	E0120 12:57:08.315806       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 12:57:08.358477       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [8b22c408d59e9871bdc35d63bf6e04a38ceb5414b9141e1a5733c8b997fd04cd] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0120 12:35:39.472175       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0120 12:35:39.509776       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.222"]
	E0120 12:35:39.509859       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0120 12:35:39.603805       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0120 12:35:39.603832       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0120 12:35:39.603853       1 server_linux.go:170] "Using iptables Proxier"
	I0120 12:35:39.607097       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0120 12:35:39.607541       1 server.go:497] "Version info" version="v1.32.0"
	I0120 12:35:39.607552       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0120 12:35:39.611866       1 config.go:199] "Starting service config controller"
	I0120 12:35:39.615049       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0120 12:35:39.615125       1 config.go:105] "Starting endpoint slice config controller"
	I0120 12:35:39.615131       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0120 12:35:39.619611       1 config.go:329] "Starting node config controller"
	I0120 12:35:39.619621       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0120 12:35:39.715516       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0120 12:35:39.715554       1 shared_informer.go:320] Caches are synced for service config
	I0120 12:35:39.719914       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b5270995abf8a0993d07ee060c0850faab1f12c6e5a2e302dc9b08b941ac3952] <==
	W0120 12:35:32.271126       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0120 12:35:32.271320       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 12:35:32.275788       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0120 12:35:32.275871       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0120 12:35:32.283300       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0120 12:35:32.283444       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 12:35:32.288673       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0120 12:35:32.288735       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 12:35:32.335138       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0120 12:35:32.335300       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0120 12:35:32.463487       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0120 12:35:32.463536       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 12:35:32.474698       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0120 12:35:32.474739       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0120 12:35:32.480442       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0120 12:35:32.480496       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 12:35:32.492113       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0120 12:35:32.492176       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 12:35:32.614829       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0120 12:35:32.614881       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 12:35:32.688891       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0120 12:35:32.688975       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 12:35:32.974945       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0120 12:35:32.975079       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0120 12:35:35.214732       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 20 12:56:35 default-k8s-diff-port-981597 kubelet[3061]: E0120 12:56:35.249189    3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-xkrxx" podUID="cf78f231-b1e0-4566-817b-bfb9b8dac3f6"
	Jan 20 12:56:44 default-k8s-diff-port-981597 kubelet[3061]: E0120 12:56:44.538044    3061 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377804537678107,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 12:56:44 default-k8s-diff-port-981597 kubelet[3061]: E0120 12:56:44.538106    3061 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377804537678107,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 12:56:46 default-k8s-diff-port-981597 kubelet[3061]: I0120 12:56:46.229200    3061 scope.go:117] "RemoveContainer" containerID="a0f1e59786a3719dcc312363833117bd1b2fdb6d8a63f77d6a9bf064a3cb0810"
	Jan 20 12:56:46 default-k8s-diff-port-981597 kubelet[3061]: E0120 12:56:46.229508    3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-sxf9j_kubernetes-dashboard(befe7122-5e28-4328-8eb2-5e45c6ba3035)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-sxf9j" podUID="befe7122-5e28-4328-8eb2-5e45c6ba3035"
	Jan 20 12:56:48 default-k8s-diff-port-981597 kubelet[3061]: E0120 12:56:48.234475    3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-xkrxx" podUID="cf78f231-b1e0-4566-817b-bfb9b8dac3f6"
	Jan 20 12:56:54 default-k8s-diff-port-981597 kubelet[3061]: E0120 12:56:54.539703    3061 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377814539319883,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 12:56:54 default-k8s-diff-port-981597 kubelet[3061]: E0120 12:56:54.539774    3061 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377814539319883,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 12:56:57 default-k8s-diff-port-981597 kubelet[3061]: I0120 12:56:57.229545    3061 scope.go:117] "RemoveContainer" containerID="a0f1e59786a3719dcc312363833117bd1b2fdb6d8a63f77d6a9bf064a3cb0810"
	Jan 20 12:56:58 default-k8s-diff-port-981597 kubelet[3061]: I0120 12:56:58.215010    3061 scope.go:117] "RemoveContainer" containerID="a0f1e59786a3719dcc312363833117bd1b2fdb6d8a63f77d6a9bf064a3cb0810"
	Jan 20 12:56:58 default-k8s-diff-port-981597 kubelet[3061]: I0120 12:56:58.216518    3061 scope.go:117] "RemoveContainer" containerID="bd591b12a967033ea3c50ab6b671e6aa184f860dd98aa6cd4fbeb0c5e1978f20"
	Jan 20 12:56:58 default-k8s-diff-port-981597 kubelet[3061]: E0120 12:56:58.219419    3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-sxf9j_kubernetes-dashboard(befe7122-5e28-4328-8eb2-5e45c6ba3035)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-sxf9j" podUID="befe7122-5e28-4328-8eb2-5e45c6ba3035"
	Jan 20 12:57:01 default-k8s-diff-port-981597 kubelet[3061]: E0120 12:57:01.230943    3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-xkrxx" podUID="cf78f231-b1e0-4566-817b-bfb9b8dac3f6"
	Jan 20 12:57:04 default-k8s-diff-port-981597 kubelet[3061]: E0120 12:57:04.541830    3061 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377824541371959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 12:57:04 default-k8s-diff-port-981597 kubelet[3061]: E0120 12:57:04.542133    3061 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377824541371959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 12:57:05 default-k8s-diff-port-981597 kubelet[3061]: I0120 12:57:05.095736    3061 scope.go:117] "RemoveContainer" containerID="bd591b12a967033ea3c50ab6b671e6aa184f860dd98aa6cd4fbeb0c5e1978f20"
	Jan 20 12:57:05 default-k8s-diff-port-981597 kubelet[3061]: E0120 12:57:05.096045    3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-sxf9j_kubernetes-dashboard(befe7122-5e28-4328-8eb2-5e45c6ba3035)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-sxf9j" podUID="befe7122-5e28-4328-8eb2-5e45c6ba3035"
	Jan 20 12:57:14 default-k8s-diff-port-981597 kubelet[3061]: E0120 12:57:14.544422    3061 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377834544048045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 12:57:14 default-k8s-diff-port-981597 kubelet[3061]: E0120 12:57:14.544867    3061 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377834544048045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 12:57:15 default-k8s-diff-port-981597 kubelet[3061]: E0120 12:57:15.230724    3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-xkrxx" podUID="cf78f231-b1e0-4566-817b-bfb9b8dac3f6"
	Jan 20 12:57:20 default-k8s-diff-port-981597 kubelet[3061]: I0120 12:57:20.230341    3061 scope.go:117] "RemoveContainer" containerID="bd591b12a967033ea3c50ab6b671e6aa184f860dd98aa6cd4fbeb0c5e1978f20"
	Jan 20 12:57:20 default-k8s-diff-port-981597 kubelet[3061]: E0120 12:57:20.231032    3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-sxf9j_kubernetes-dashboard(befe7122-5e28-4328-8eb2-5e45c6ba3035)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-sxf9j" podUID="befe7122-5e28-4328-8eb2-5e45c6ba3035"
	Jan 20 12:57:24 default-k8s-diff-port-981597 kubelet[3061]: E0120 12:57:24.547357    3061 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377844546746638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 12:57:24 default-k8s-diff-port-981597 kubelet[3061]: E0120 12:57:24.547402    3061 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377844546746638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 12:57:27 default-k8s-diff-port-981597 kubelet[3061]: E0120 12:57:27.234682    3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-xkrxx" podUID="cf78f231-b1e0-4566-817b-bfb9b8dac3f6"
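	The metrics-server entries above can never clear on their own: the image reference points at fake.domain, which does not resolve, so every pull attempt fails the same way. Reproducing the failure by hand from inside the node looks roughly like this (a sketch, assuming crictl is available on the guest as the CRI-O setup implies):
	    out/minikube-linux-amd64 -p default-k8s-diff-port-981597 ssh -- sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4
	    # expected to fail with: dial tcp: lookup fake.domain: no such host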
	
	
	==> kubernetes-dashboard [646d6dda93cf3d6ff6586ec2febdae832246e68ad9b5df058fda94e232df2b64] <==
	2025/01/20 12:45:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:45:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:46:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:46:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:47:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:47:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:48:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:48:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:49:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:49:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:50:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:50:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:51:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:51:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:52:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:52:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:53:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:53:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:54:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:54:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:55:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:55:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:56:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:56:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:57:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
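	The dashboard's metric client keeps failing its health check against the dashboard-metrics-scraper service, which is consistent with the scraper pod sitting in CrashLoopBackOff in the kubelet log above: a service with no ready endpoints cannot answer the probe. Confirming that would look roughly like (sketch, not part of the run):
	    kubectl --context default-k8s-diff-port-981597 -n kubernetes-dashboard get endpoints dashboard-metrics-scraper
	    kubectl --context default-k8s-diff-port-981597 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-86c6bf9756-sxf9j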
	
	
	==> storage-provisioner [77d1df4c4102fc55714f53466875316e2a832e4ada1c2ce40ce05a51ffe1321b] <==
	I0120 12:35:41.497195       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0120 12:35:41.511531       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0120 12:35:41.511578       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0120 12:35:41.519621       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0120 12:35:41.520172       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ca7e28f9-cf3a-418f-9b6e-c4866604487c", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-981597_a19f8a72-2577-44db-a620-17ef1dcf8f1d became leader
	I0120 12:35:41.520399       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-981597_a19f8a72-2577-44db-a620-17ef1dcf8f1d!
	I0120 12:35:41.620874       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-981597_a19f8a72-2577-44db-a620-17ef1dcf8f1d!
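	The provisioner acquires its leader lock on the kube-system/k8s.io-minikube-hostpath Endpoints object before starting the controller. Assuming the endpoints-based lock the log suggests, the current holder is recorded in that object's control-plane.alpha.kubernetes.io/leader annotation and can be read with (sketch):
	    kubectl --context default-k8s-diff-port-981597 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml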
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-981597 -n default-k8s-diff-port-981597
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-981597 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-xkrxx
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-981597 describe pod metrics-server-f79f97bbb-xkrxx
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-981597 describe pod metrics-server-f79f97bbb-xkrxx: exit status 1 (64.503072ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-xkrxx" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-981597 describe pod metrics-server-f79f97bbb-xkrxx: exit status 1
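The NotFound here is almost certainly a namespace mismatch rather than a missing pod: the non-running pod was found with an all-namespaces listing (it lives in kube-system, per the kubelet log), but the describe ran without -n and so looked in default. Capturing namespace and name together avoids that, e.g. (a sketch, not how helpers_test.go does it):
    kubectl --context default-k8s-diff-port-981597 get po -A --field-selector=status.phase!=Running \
      -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name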
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (1643.30s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (513.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-134433 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0120 12:32:41.308493  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/functional-473856/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:34:37.399891  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:36:00.483276  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:37:41.308633  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/functional-473856/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:39:37.399849  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-134433 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (8m30.634703381s)

                                                
                                                
-- stdout --
	* [old-k8s-version-134433] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20151
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20151-942401/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-942401/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-134433" primary control-plane node in "old-k8s-version-134433" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-134433" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
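The stdout above shows "Generating certificates and keys ..." and "Booting up control plane ..." twice, which suggests the control-plane bring-up was retried once before the start gave up with exit status 109. When that happens, the usual next step is to pull the in-guest logs, along the lines of (sketch):
    out/minikube-linux-amd64 -p old-k8s-version-134433 logs -n 25
    out/minikube-linux-amd64 -p old-k8s-version-134433 ssh -- sudo journalctl -u kubelet --no-pager | tail -n 50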
** stderr ** 
	I0120 12:31:11.956010  993585 out.go:345] Setting OutFile to fd 1 ...
	I0120 12:31:11.956137  993585 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:31:11.956148  993585 out.go:358] Setting ErrFile to fd 2...
	I0120 12:31:11.956152  993585 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:31:11.956366  993585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-942401/.minikube/bin
	I0120 12:31:11.956993  993585 out.go:352] Setting JSON to false
	I0120 12:31:11.958067  993585 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":18815,"bootTime":1737357457,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 12:31:11.958186  993585 start.go:139] virtualization: kvm guest
	I0120 12:31:11.960398  993585 out.go:177] * [old-k8s-version-134433] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 12:31:11.961613  993585 out.go:177]   - MINIKUBE_LOCATION=20151
	I0120 12:31:11.961713  993585 notify.go:220] Checking for updates...
	I0120 12:31:11.964011  993585 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 12:31:11.965092  993585 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:31:11.966144  993585 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-942401/.minikube
	I0120 12:31:11.967208  993585 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 12:31:11.968350  993585 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 12:31:11.969863  993585 config.go:182] Loaded profile config "old-k8s-version-134433": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0120 12:31:11.970277  993585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:31:11.970346  993585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:31:11.985419  993585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43807
	I0120 12:31:11.985879  993585 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:31:11.986551  993585 main.go:141] libmachine: Using API Version  1
	I0120 12:31:11.986596  993585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:31:11.986957  993585 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:31:11.987146  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:31:11.988784  993585 out.go:177] * Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
	I0120 12:31:11.989825  993585 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 12:31:11.990150  993585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:31:11.990189  993585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:31:12.004831  993585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34269
	I0120 12:31:12.005226  993585 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:31:12.005709  993585 main.go:141] libmachine: Using API Version  1
	I0120 12:31:12.005734  993585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:31:12.006077  993585 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:31:12.006313  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:31:12.043016  993585 out.go:177] * Using the kvm2 driver based on existing profile
	I0120 12:31:12.044104  993585 start.go:297] selected driver: kvm2
	I0120 12:31:12.044121  993585 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-134433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-1
34433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube
-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:31:12.044209  993585 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 12:31:12.044916  993585 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:31:12.045000  993585 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20151-942401/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 12:31:12.060200  993585 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 12:31:12.060534  993585 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 12:31:12.060567  993585 cni.go:84] Creating CNI manager for ""
	I0120 12:31:12.060601  993585 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:31:12.060657  993585 start.go:340] cluster config:
	{Name:old-k8s-version-134433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-134433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
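	Note the CustomAddonRegistries entry in the config above: MetricsServer maps to fake.domain, so whenever the metrics-server addon comes up its image pull is guaranteed to fail; the same mapping is what produced the ImagePullBackOff entries for default-k8s-diff-port earlier. Once such an addon is deployed, the effective image reference can be checked with (sketch):
	    kubectl --context old-k8s-version-134433 -n kube-system get deploy metrics-server \
	      -o jsonpath='{.spec.template.spec.containers[0].image}'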
	I0120 12:31:12.060783  993585 iso.go:125] acquiring lock: {Name:mk7f0ac9e7ba04626414e742c3e5292d79996a4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:31:12.062963  993585 out.go:177] * Starting "old-k8s-version-134433" primary control-plane node in "old-k8s-version-134433" cluster
	I0120 12:31:12.064143  993585 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0120 12:31:12.064184  993585 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0120 12:31:12.064195  993585 cache.go:56] Caching tarball of preloaded images
	I0120 12:31:12.064275  993585 preload.go:172] Found /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 12:31:12.064287  993585 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0120 12:31:12.064378  993585 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/config.json ...
	I0120 12:31:12.064565  993585 start.go:360] acquireMachinesLock for old-k8s-version-134433: {Name:mkd5527ce9753efd08511b23d71dbb6bbf416f1b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 12:31:12.064608  993585 start.go:364] duration metric: took 25.197µs to acquireMachinesLock for "old-k8s-version-134433"
	I0120 12:31:12.064624  993585 start.go:96] Skipping create...Using existing machine configuration
	I0120 12:31:12.064632  993585 fix.go:54] fixHost starting: 
	I0120 12:31:12.064897  993585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:31:12.064947  993585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:31:12.079979  993585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41799
	I0120 12:31:12.080385  993585 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:31:12.080944  993585 main.go:141] libmachine: Using API Version  1
	I0120 12:31:12.080969  993585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:31:12.081279  993585 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:31:12.081512  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:31:12.081673  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetState
	I0120 12:31:12.083222  993585 fix.go:112] recreateIfNeeded on old-k8s-version-134433: state=Stopped err=<nil>
	I0120 12:31:12.083247  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	W0120 12:31:12.083395  993585 fix.go:138] unexpected machine state, will restart: <nil>
	I0120 12:31:12.084950  993585 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-134433" ...
	I0120 12:31:12.086040  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .Start
	I0120 12:31:12.086250  993585 main.go:141] libmachine: (old-k8s-version-134433) starting domain...
	I0120 12:31:12.086274  993585 main.go:141] libmachine: (old-k8s-version-134433) ensuring networks are active...
	I0120 12:31:12.087116  993585 main.go:141] libmachine: (old-k8s-version-134433) Ensuring network default is active
	I0120 12:31:12.087507  993585 main.go:141] libmachine: (old-k8s-version-134433) Ensuring network mk-old-k8s-version-134433 is active
	I0120 12:31:12.087972  993585 main.go:141] libmachine: (old-k8s-version-134433) getting domain XML...
	I0120 12:31:12.088701  993585 main.go:141] libmachine: (old-k8s-version-134433) creating domain...
	I0120 12:31:13.353235  993585 main.go:141] libmachine: (old-k8s-version-134433) waiting for IP...
	I0120 12:31:13.354008  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:13.354424  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:13.354568  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:13.354436  993621 retry.go:31] will retry after 195.738853ms: waiting for domain to come up
	I0120 12:31:13.551979  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:13.552485  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:13.552546  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:13.552470  993621 retry.go:31] will retry after 286.807934ms: waiting for domain to come up
	I0120 12:31:13.841028  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:13.841561  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:13.841601  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:13.841522  993621 retry.go:31] will retry after 438.177816ms: waiting for domain to come up
	I0120 12:31:14.280867  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:14.281254  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:14.281287  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:14.281212  993621 retry.go:31] will retry after 401.413585ms: waiting for domain to come up
	I0120 12:31:14.684677  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:14.685256  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:14.685288  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:14.685176  993621 retry.go:31] will retry after 625.770313ms: waiting for domain to come up
	I0120 12:31:15.312721  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:15.313245  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:15.313281  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:15.313210  993621 retry.go:31] will retry after 842.789855ms: waiting for domain to come up
	I0120 12:31:16.157329  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:16.157939  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:16.157970  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:16.157917  993621 retry.go:31] will retry after 997.649049ms: waiting for domain to come up
	I0120 12:31:17.157668  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:17.158288  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:17.158346  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:17.158266  993621 retry.go:31] will retry after 1.3317802s: waiting for domain to come up
	I0120 12:31:18.491767  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:18.492314  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:18.492345  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:18.492274  993621 retry.go:31] will retry after 1.684115629s: waiting for domain to come up
	I0120 12:31:20.177742  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:20.178312  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:20.178344  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:20.178272  993621 retry.go:31] will retry after 2.098717757s: waiting for domain to come up
	I0120 12:31:22.279263  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:22.279782  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:22.279815  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:22.279747  993621 retry.go:31] will retry after 2.908067158s: waiting for domain to come up
	I0120 12:31:25.191591  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:25.192058  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:25.192082  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:25.192027  993621 retry.go:31] will retry after 2.860704715s: waiting for domain to come up
	I0120 12:31:28.053824  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:28.054209  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:28.054237  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:28.054168  993621 retry.go:31] will retry after 3.593877393s: waiting for domain to come up
	I0120 12:31:31.651977  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.652456  993585 main.go:141] libmachine: (old-k8s-version-134433) found domain IP: 192.168.50.250
	I0120 12:31:31.652477  993585 main.go:141] libmachine: (old-k8s-version-134433) reserving static IP address...
	I0120 12:31:31.652499  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has current primary IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.652880  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "old-k8s-version-134433", mac: "52:54:00:4a:b6:e2", ip: "192.168.50.250"} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:31.652910  993585 main.go:141] libmachine: (old-k8s-version-134433) reserved static IP address 192.168.50.250 for domain old-k8s-version-134433
	I0120 12:31:31.652928  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | skip adding static IP to network mk-old-k8s-version-134433 - found existing host DHCP lease matching {name: "old-k8s-version-134433", mac: "52:54:00:4a:b6:e2", ip: "192.168.50.250"}
	I0120 12:31:31.652949  993585 main.go:141] libmachine: (old-k8s-version-134433) waiting for SSH...
	I0120 12:31:31.652979  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | Getting to WaitForSSH function...
	I0120 12:31:31.655045  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.655323  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:31.655341  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.655472  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | Using SSH client type: external
	I0120 12:31:31.655509  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | Using SSH private key: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433/id_rsa (-rw-------)
	I0120 12:31:31.655555  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.250 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 12:31:31.655574  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | About to run SSH command:
	I0120 12:31:31.655599  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | exit 0
	I0120 12:31:31.778333  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | SSH cmd err, output: <nil>: 
	I0120 12:31:31.778766  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetConfigRaw
	I0120 12:31:31.779451  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetIP
	I0120 12:31:31.782111  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.782481  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:31.782538  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.782728  993585 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/config.json ...
	I0120 12:31:31.782983  993585 machine.go:93] provisionDockerMachine start ...
	I0120 12:31:31.783008  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:31:31.783221  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:31.785482  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.785771  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:31.785804  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.785958  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:31:31.786153  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:31.786352  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:31.786496  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:31:31.786666  993585 main.go:141] libmachine: Using SSH client type: native
	I0120 12:31:31.786905  993585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I0120 12:31:31.786918  993585 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 12:31:31.886822  993585 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0120 12:31:31.886860  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetMachineName
	I0120 12:31:31.887127  993585 buildroot.go:166] provisioning hostname "old-k8s-version-134433"
	I0120 12:31:31.887156  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetMachineName
	I0120 12:31:31.887366  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:31.890506  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.890962  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:31.891053  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.891155  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:31:31.891355  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:31.891522  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:31.891722  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:31:31.891900  993585 main.go:141] libmachine: Using SSH client type: native
	I0120 12:31:31.892067  993585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I0120 12:31:31.892078  993585 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-134433 && echo "old-k8s-version-134433" | sudo tee /etc/hostname
	I0120 12:31:32.007463  993585 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-134433
	
	I0120 12:31:32.007490  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:32.010730  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.011157  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.011184  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.011407  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:31:32.011597  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.011774  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.011883  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:31:32.012032  993585 main.go:141] libmachine: Using SSH client type: native
	I0120 12:31:32.012246  993585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I0120 12:31:32.012275  993585 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-134433' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-134433/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-134433' | sudo tee -a /etc/hosts; 
				fi
			fi
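	The snippet above makes the new hostname resolvable inside the VM: if no /etc/hosts line already ends in old-k8s-version-134433, it either rewrites the 127.0.1.1 entry or appends one. Checking the result over the same SSH path is straightforward (sketch, not part of the run):
	    out/minikube-linux-amd64 -p old-k8s-version-134433 ssh -- hostname
	    out/minikube-linux-amd64 -p old-k8s-version-134433 ssh -- grep old-k8s-version-134433 /etc/hosts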
	I0120 12:31:32.122811  993585 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 12:31:32.122845  993585 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20151-942401/.minikube CaCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20151-942401/.minikube}
	I0120 12:31:32.122865  993585 buildroot.go:174] setting up certificates
	I0120 12:31:32.122875  993585 provision.go:84] configureAuth start
	I0120 12:31:32.122884  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetMachineName
	I0120 12:31:32.123125  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetIP
	I0120 12:31:32.125986  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.126423  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.126446  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.126677  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:32.128626  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.129281  993585 provision.go:143] copyHostCerts
	I0120 12:31:32.129354  993585 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem, removing ...
	I0120 12:31:32.129380  993585 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem
	I0120 12:31:32.129382  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.129411  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.129470  993585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem (1675 bytes)
	I0120 12:31:32.129581  993585 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem, removing ...
	I0120 12:31:32.129592  993585 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem
	I0120 12:31:32.129634  993585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem (1078 bytes)
	I0120 12:31:32.129702  993585 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem, removing ...
	I0120 12:31:32.129712  993585 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem
	I0120 12:31:32.129741  993585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem (1123 bytes)
	I0120 12:31:32.129806  993585 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-134433 san=[127.0.0.1 192.168.50.250 localhost minikube old-k8s-version-134433]
	I0120 12:31:32.226358  993585 provision.go:177] copyRemoteCerts
	I0120 12:31:32.226410  993585 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 12:31:32.226432  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:32.228814  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.229133  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.229168  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.229333  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:31:32.229548  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.229722  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:31:32.229881  993585 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433/id_rsa Username:docker}
	I0120 12:31:32.315787  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0120 12:31:32.341389  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0120 12:31:32.364095  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0120 12:31:32.386543  993585 provision.go:87] duration metric: took 263.65519ms to configureAuth
	I0120 12:31:32.386572  993585 buildroot.go:189] setting minikube options for container-runtime
	I0120 12:31:32.386750  993585 config.go:182] Loaded profile config "old-k8s-version-134433": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0120 12:31:32.386844  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:32.389737  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.390222  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.390257  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.390478  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:31:32.390683  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.390858  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.391063  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:31:32.391234  993585 main.go:141] libmachine: Using SSH client type: native
	I0120 12:31:32.391417  993585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I0120 12:31:32.391438  993585 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 12:31:32.617034  993585 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 12:31:32.617072  993585 machine.go:96] duration metric: took 834.071068ms to provisionDockerMachine
	I0120 12:31:32.617085  993585 start.go:293] postStartSetup for "old-k8s-version-134433" (driver="kvm2")
	I0120 12:31:32.617096  993585 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 12:31:32.617121  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:31:32.617506  993585 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 12:31:32.617547  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:32.620838  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.621275  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.621310  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.621640  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:31:32.621865  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.622064  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:31:32.622248  993585 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433/id_rsa Username:docker}
	I0120 12:31:32.703904  993585 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 12:31:32.707878  993585 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 12:31:32.707902  993585 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-942401/.minikube/addons for local assets ...
	I0120 12:31:32.707970  993585 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-942401/.minikube/files for local assets ...
	I0120 12:31:32.708078  993585 filesync.go:149] local asset: /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem -> 9496562.pem in /etc/ssl/certs
	I0120 12:31:32.708218  993585 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 12:31:32.716746  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem --> /etc/ssl/certs/9496562.pem (1708 bytes)
	I0120 12:31:32.739636  993585 start.go:296] duration metric: took 122.539492ms for postStartSetup
	I0120 12:31:32.739674  993585 fix.go:56] duration metric: took 20.675041615s for fixHost
	I0120 12:31:32.739700  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:32.742857  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.743259  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.743291  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.743451  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:31:32.743616  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.743807  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.743953  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:31:32.744112  993585 main.go:141] libmachine: Using SSH client type: native
	I0120 12:31:32.744267  993585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I0120 12:31:32.744277  993585 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 12:31:32.850613  993585 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737376292.825194263
	
	I0120 12:31:32.850655  993585 fix.go:216] guest clock: 1737376292.825194263
	I0120 12:31:32.850667  993585 fix.go:229] Guest: 2025-01-20 12:31:32.825194263 +0000 UTC Remote: 2025-01-20 12:31:32.739679914 +0000 UTC m=+20.823511960 (delta=85.514349ms)
	I0120 12:31:32.850692  993585 fix.go:200] guest clock delta is within tolerance: 85.514349ms
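
The fix.go lines above parse the output of "date +%s.%N" on the guest and compare it with the host clock, accepting the machine when the delta stays small. A minimal Go sketch of that comparison, assuming the raw SSH output is already in hand (parseGuestClock and the one-second tolerance are illustrative assumptions, not minikube's own code):

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock converts `date +%s.%N` output such as "1737376292.825194263"
    // into a time.Time. Illustrative helper only.
    func parseGuestClock(out string) (time.Time, error) {
    	secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	sec := int64(secs)
    	nsec := int64((secs - float64(sec)) * 1e9)
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1737376292.825194263")
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Since(guest)
    	// Mirror the "delta=85.514349ms ... within tolerance" decision above,
    	// using one second as the assumed tolerance.
    	if math.Abs(delta.Seconds()) < 1.0 {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %v is too large, would resync\n", delta)
    	}
    }
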
	I0120 12:31:32.850697  993585 start.go:83] releasing machines lock for "old-k8s-version-134433", held for 20.786078788s
	I0120 12:31:32.850723  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:31:32.850994  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetIP
	I0120 12:31:32.853508  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.853864  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.853895  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.854081  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:31:32.854574  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:31:32.854785  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:31:32.854878  993585 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 12:31:32.854915  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:32.855040  993585 ssh_runner.go:195] Run: cat /version.json
	I0120 12:31:32.855073  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:32.857825  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.858071  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.858242  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.858273  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.858472  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:31:32.858613  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.858642  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.858678  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.858803  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:31:32.858907  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:31:32.858970  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.859042  993585 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433/id_rsa Username:docker}
	I0120 12:31:32.859089  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:31:32.859218  993585 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433/id_rsa Username:docker}
	I0120 12:31:32.963636  993585 ssh_runner.go:195] Run: systemctl --version
	I0120 12:31:32.969637  993585 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 12:31:33.109368  993585 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 12:31:33.116476  993585 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 12:31:33.116551  993585 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 12:31:33.132563  993585 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 12:31:33.132586  993585 start.go:495] detecting cgroup driver to use...
	I0120 12:31:33.132666  993585 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 12:31:33.149598  993585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 12:31:33.163579  993585 docker.go:217] disabling cri-docker service (if available) ...
	I0120 12:31:33.163644  993585 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 12:31:33.176714  993585 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 12:31:33.190002  993585 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 12:31:33.317215  993585 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 12:31:33.474712  993585 docker.go:233] disabling docker service ...
	I0120 12:31:33.474786  993585 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 12:31:33.487733  993585 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 12:31:33.500315  993585 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 12:31:33.629138  993585 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 12:31:33.765704  993585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 12:31:33.780662  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 12:31:33.799085  993585 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0120 12:31:33.799155  993585 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:31:33.808607  993585 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 12:31:33.808659  993585 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:31:33.818065  993585 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:31:33.827515  993585 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
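
The sed invocations above point cri-o at the registry.k8s.io/pause:3.2 pause image and switch it to the cgroupfs cgroup manager by rewriting /etc/crio/crio.conf.d/02-crio.conf in place. A hedged Go sketch of the same rewrite done locally on a file (rewriteCrioConf is a made-up name; in the log the edits happen over SSH and also add the conmon_cgroup line, which this fragment omits):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // rewriteCrioConf replaces the pause_image and cgroup_manager settings in a
    // cri-o drop-in, roughly what the sed commands above do remotely.
    func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
    	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
    	return os.WriteFile(path, out, 0o644)
    }

    func main() {
    	err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
    		"registry.k8s.io/pause:3.2", "cgroupfs")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
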
	I0120 12:31:33.837226  993585 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 12:31:33.846616  993585 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 12:31:33.855024  993585 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 12:31:33.855077  993585 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 12:31:33.867670  993585 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 12:31:33.876402  993585 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:31:34.006664  993585 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 12:31:34.098750  993585 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 12:31:34.098834  993585 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 12:31:34.103642  993585 start.go:563] Will wait 60s for crictl version
	I0120 12:31:34.103699  993585 ssh_runner.go:195] Run: which crictl
	I0120 12:31:34.107125  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 12:31:34.144190  993585 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 12:31:34.144288  993585 ssh_runner.go:195] Run: crio --version
	I0120 12:31:34.172817  993585 ssh_runner.go:195] Run: crio --version
	I0120 12:31:34.203224  993585 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0120 12:31:34.204485  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetIP
	I0120 12:31:34.207458  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:34.207876  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:34.207904  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:34.208137  993585 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0120 12:31:34.211891  993585 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
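
The bash one-liner above filters any stale host.minikube.internal line out of /etc/hosts and appends a fresh entry for 192.168.50.1. A small Go sketch of the same filter-and-append idea applied to a local file (ensureHostsEntry is a hypothetical helper):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry drops any existing line ending in "<TAB>host" and appends
    // a fresh "ip<TAB>host" entry, mirroring the grep -v / echo idiom above.
    func ensureHostsEntry(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if strings.HasSuffix(line, "\t"+host) {
    			continue // remove the stale entry
    		}
    		if line != "" {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "192.168.50.1", "host.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
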
	I0120 12:31:34.223705  993585 kubeadm.go:883] updating cluster {Name:old-k8s-version-134433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-134433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 12:31:34.223826  993585 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0120 12:31:34.223864  993585 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:31:34.268289  993585 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
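
The preload check above shells out to "crictl images --output json" and concludes that images are not preloaded because registry.k8s.io/kube-apiserver:v1.20.0 is missing from the result. A rough Go sketch of that lookup; the JSON field names are an assumption about crictl's output shape rather than a verified schema:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // crictlImages models only the parts of `crictl images --output json` needed here.
    // The field names are assumed, not taken from a published schema.
    type crictlImages struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    // hasImage reports whether the container runtime already has the given tag.
    func hasImage(want string) (bool, error) {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		return false, err
    	}
    	var imgs crictlImages
    	if err := json.Unmarshal(out, &imgs); err != nil {
    		return false, err
    	}
    	for _, img := range imgs.Images {
    		for _, tag := range img.RepoTags {
    			if tag == want {
    				return true, nil
    			}
    		}
    	}
    	return false, nil
    }

    func main() {
    	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.20.0")
    	fmt.Println("preloaded:", ok, "err:", err)
    }
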
	I0120 12:31:34.268365  993585 ssh_runner.go:195] Run: which lz4
	I0120 12:31:34.272014  993585 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 12:31:34.275957  993585 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 12:31:34.275987  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0120 12:31:35.756157  993585 crio.go:462] duration metric: took 1.484200004s to copy over tarball
	I0120 12:31:35.756230  993585 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 12:31:38.594323  993585 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.838057752s)
	I0120 12:31:38.594429  993585 crio.go:469] duration metric: took 2.838184511s to extract the tarball
	I0120 12:31:38.594454  993585 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0120 12:31:38.636288  993585 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:31:38.673987  993585 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0120 12:31:38.674016  993585 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0120 12:31:38.674097  993585 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:31:38.674135  993585 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0120 12:31:38.674145  993585 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:31:38.674178  993585 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:31:38.674112  993585 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:31:38.674208  993585 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0120 12:31:38.674120  993585 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:31:38.674479  993585 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0120 12:31:38.675856  993585 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:31:38.675888  993585 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:31:38.675857  993585 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0120 12:31:38.675857  993585 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:31:38.675858  993585 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:31:38.675860  993585 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0120 12:31:38.675864  993585 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0120 12:31:38.675864  993585 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:31:38.891668  993585 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0120 12:31:38.898693  993585 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:31:38.901324  993585 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:31:38.903830  993585 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:31:38.907827  993585 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0120 12:31:38.909691  993585 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0120 12:31:38.911977  993585 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:31:38.988279  993585 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0120 12:31:38.988332  993585 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0120 12:31:38.988388  993585 ssh_runner.go:195] Run: which crictl
	I0120 12:31:39.039162  993585 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0120 12:31:39.039204  993585 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:31:39.039255  993585 ssh_runner.go:195] Run: which crictl
	I0120 12:31:39.070879  993585 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0120 12:31:39.070922  993585 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:31:39.070974  993585 ssh_runner.go:195] Run: which crictl
	I0120 12:31:39.078869  993585 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0120 12:31:39.078897  993585 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0120 12:31:39.078910  993585 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:31:39.078930  993585 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0120 12:31:39.078948  993585 ssh_runner.go:195] Run: which crictl
	I0120 12:31:39.078955  993585 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0120 12:31:39.078982  993585 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0120 12:31:39.078982  993585 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0120 12:31:39.079004  993585 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:31:39.079014  993585 ssh_runner.go:195] Run: which crictl
	I0120 12:31:39.078986  993585 ssh_runner.go:195] Run: which crictl
	I0120 12:31:39.079039  993585 ssh_runner.go:195] Run: which crictl
	I0120 12:31:39.079028  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 12:31:39.079059  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:31:39.081555  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:31:39.083015  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:31:39.130647  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 12:31:39.130694  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:31:39.186867  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:31:39.186961  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 12:31:39.186966  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 12:31:39.209991  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:31:39.210008  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:31:39.246249  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:31:39.246259  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 12:31:39.321520  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:31:39.321580  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 12:31:39.336397  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 12:31:39.361423  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:31:39.361625  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:31:39.382747  993585 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0120 12:31:39.382804  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 12:31:39.434483  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 12:31:39.434505  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:31:39.494972  993585 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0120 12:31:39.495045  993585 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0120 12:31:39.520487  993585 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0120 12:31:39.520534  993585 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0120 12:31:39.529832  993585 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0120 12:31:39.530428  993585 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0120 12:31:39.865446  993585 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:31:40.001428  993585 cache_images.go:92] duration metric: took 1.327395723s to LoadCachedImages
	W0120 12:31:40.001521  993585 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0120 12:31:40.001540  993585 kubeadm.go:934] updating node { 192.168.50.250 8443 v1.20.0 crio true true} ...
	I0120 12:31:40.001666  993585 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-134433 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-134433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 12:31:40.001759  993585 ssh_runner.go:195] Run: crio config
	I0120 12:31:40.049768  993585 cni.go:84] Creating CNI manager for ""
	I0120 12:31:40.049788  993585 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:31:40.049798  993585 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 12:31:40.049817  993585 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.250 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-134433 NodeName:old-k8s-version-134433 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0120 12:31:40.049953  993585 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-134433"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.250
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.250"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
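
The kubeadm config printed above is rendered from the kubeadm options struct and later copied to /var/tmp/minikube/kubeadm.yaml. A minimal Go sketch of rendering such a file with text/template, covering only a fragment of the InitConfiguration shown here (the template text and parameter names are illustrative, not minikube's actual template):

    package main

    import (
    	"os"
    	"text/template"
    )

    // A trimmed-down template covering only part of the InitConfiguration above;
    // the real generated config is considerably larger.
    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta2
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
    `

    type params struct {
    	NodeIP        string
    	APIServerPort int
    	CRISocket     string
    	NodeName      string
    }

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(initCfg))
    	// Values taken from the log above (192.168.50.250, port 8443, cri-o socket).
    	err := t.Execute(os.Stdout, params{
    		NodeIP:        "192.168.50.250",
    		APIServerPort: 8443,
    		CRISocket:     "/var/run/crio/crio.sock",
    		NodeName:      "old-k8s-version-134433",
    	})
    	if err != nil {
    		panic(err)
    	}
    }
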
	
	I0120 12:31:40.050035  993585 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0120 12:31:40.060513  993585 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 12:31:40.060576  993585 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 12:31:40.070416  993585 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0120 12:31:40.086321  993585 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 12:31:40.101428  993585 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0120 12:31:40.118688  993585 ssh_runner.go:195] Run: grep 192.168.50.250	control-plane.minikube.internal$ /etc/hosts
	I0120 12:31:40.122319  993585 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.250	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 12:31:40.133757  993585 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:31:40.267585  993585 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:31:40.285307  993585 certs.go:68] Setting up /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433 for IP: 192.168.50.250
	I0120 12:31:40.285334  993585 certs.go:194] generating shared ca certs ...
	I0120 12:31:40.285359  993585 certs.go:226] acquiring lock for ca certs: {Name:mk7d6f61e0c358ebc451db1b73619131ab80bd63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:31:40.285629  993585 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20151-942401/.minikube/ca.key
	I0120 12:31:40.285712  993585 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.key
	I0120 12:31:40.285729  993585 certs.go:256] generating profile certs ...
	I0120 12:31:40.285868  993585 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/client.key
	I0120 12:31:40.320727  993585 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/apiserver.key.6d656c93
	I0120 12:31:40.320836  993585 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/proxy-client.key
	I0120 12:31:40.321012  993585 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656.pem (1338 bytes)
	W0120 12:31:40.321045  993585 certs.go:480] ignoring /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656_empty.pem, impossibly tiny 0 bytes
	I0120 12:31:40.321055  993585 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem (1679 bytes)
	I0120 12:31:40.321077  993585 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem (1078 bytes)
	I0120 12:31:40.321112  993585 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem (1123 bytes)
	I0120 12:31:40.321133  993585 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem (1675 bytes)
	I0120 12:31:40.321173  993585 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem (1708 bytes)
	I0120 12:31:40.321820  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 12:31:40.355849  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0120 12:31:40.384987  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 12:31:40.412042  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 12:31:40.443057  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0120 12:31:40.487592  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0120 12:31:40.524256  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 12:31:40.548205  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 12:31:40.570407  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656.pem --> /usr/share/ca-certificates/949656.pem (1338 bytes)
	I0120 12:31:40.594640  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem --> /usr/share/ca-certificates/9496562.pem (1708 bytes)
	I0120 12:31:40.617736  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 12:31:40.642388  993585 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 12:31:40.658180  993585 ssh_runner.go:195] Run: openssl version
	I0120 12:31:40.663613  993585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/949656.pem && ln -fs /usr/share/ca-certificates/949656.pem /etc/ssl/certs/949656.pem"
	I0120 12:31:40.673079  993585 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/949656.pem
	I0120 12:31:40.677607  993585 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 11:30 /usr/share/ca-certificates/949656.pem
	I0120 12:31:40.677688  993585 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/949656.pem
	I0120 12:31:40.684863  993585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/949656.pem /etc/ssl/certs/51391683.0"
	I0120 12:31:40.694838  993585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9496562.pem && ln -fs /usr/share/ca-certificates/9496562.pem /etc/ssl/certs/9496562.pem"
	I0120 12:31:40.704251  993585 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9496562.pem
	I0120 12:31:40.708616  993585 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 11:30 /usr/share/ca-certificates/9496562.pem
	I0120 12:31:40.708671  993585 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9496562.pem
	I0120 12:31:40.714178  993585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9496562.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 12:31:40.723770  993585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 12:31:40.733248  993585 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:31:40.737473  993585 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 11:22 /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:31:40.737526  993585 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:31:40.742896  993585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 12:31:40.752426  993585 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 12:31:40.756579  993585 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 12:31:40.761769  993585 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 12:31:40.766935  993585 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 12:31:40.772427  993585 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 12:31:40.777720  993585 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 12:31:40.782945  993585 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
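
Each "openssl x509 -checkend 86400" run above asks whether a certificate stays valid for at least another 24 hours. The same check can be sketched in Go with crypto/x509 (certExpiresWithin is a hypothetical helper, not part of minikube):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // certExpiresWithin reports whether the PEM certificate at path expires within d,
    // the same question `openssl x509 -checkend` answers.
    func certExpiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	expiring, err := certExpiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Println("expires within 24h:", expiring)
    }
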
	I0120 12:31:40.788029  993585 kubeadm.go:392] StartCluster: {Name:old-k8s-version-134433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-134433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:31:40.788161  993585 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 12:31:40.788202  993585 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 12:31:40.825500  993585 cri.go:89] found id: ""
	I0120 12:31:40.825563  993585 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 12:31:40.835567  993585 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0120 12:31:40.835588  993585 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0120 12:31:40.835635  993585 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0120 12:31:40.845152  993585 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0120 12:31:40.845853  993585 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-134433" does not appear in /home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:31:40.846275  993585 kubeconfig.go:62] /home/jenkins/minikube-integration/20151-942401/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-134433" cluster setting kubeconfig missing "old-k8s-version-134433" context setting]
	I0120 12:31:40.846897  993585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/kubeconfig: {Name:mk7b2d17f701fc845d991764ab9cc32b8b0646e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:31:40.937033  993585 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0120 12:31:40.947319  993585 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.250
	I0120 12:31:40.947380  993585 kubeadm.go:1160] stopping kube-system containers ...
	I0120 12:31:40.947395  993585 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0120 12:31:40.947453  993585 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 12:31:40.984392  993585 cri.go:89] found id: ""
	I0120 12:31:40.984458  993585 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0120 12:31:41.001578  993585 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:31:41.011794  993585 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:31:41.011819  993585 kubeadm.go:157] found existing configuration files:
	
	I0120 12:31:41.011875  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 12:31:41.021463  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:31:41.021518  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:31:41.030836  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 12:31:41.040645  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:31:41.040698  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:31:41.049821  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 12:31:41.058040  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:31:41.058097  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:31:41.066553  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 12:31:41.075225  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:31:41.075281  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 12:31:41.084906  993585 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 12:31:41.093515  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:31:41.210064  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:31:41.666359  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:31:41.900869  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:31:42.000812  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:31:42.089692  993585 api_server.go:52] waiting for apiserver process to appear ...
	I0120 12:31:42.089772  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:42.590338  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:43.090787  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:43.590769  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:44.090319  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:44.590108  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:45.089838  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:45.590766  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:46.089997  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:46.590717  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:47.090580  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:47.590292  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:48.090251  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:48.589947  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:49.090785  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:49.590768  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:50.090614  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:50.590558  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:51.090311  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:51.590228  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:52.090647  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:52.590162  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:53.090104  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:53.590691  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:54.090868  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:54.590219  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:55.090350  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:55.590003  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:56.090726  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:56.590283  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:57.089873  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:57.590850  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:58.090780  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:58.590614  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:59.090635  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:59.590451  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:00.090701  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:00.590640  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:01.090753  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:01.590644  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:02.089853  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:02.590807  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:03.089981  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:03.590808  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:04.090857  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:04.590757  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:05.089933  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:05.590271  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:06.090623  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:06.590064  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:07.090783  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:07.589932  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:08.090055  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:08.590241  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:09.089915  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:09.590298  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:10.089954  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:10.590262  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:11.090497  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:11.590292  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:12.090562  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:12.590135  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:13.090747  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:13.590675  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:14.089959  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:14.589956  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:15.090313  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:15.590672  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:16.090234  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:16.590838  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:17.090436  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:17.589874  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:18.089914  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:18.589959  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:19.090841  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:19.590272  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:20.090818  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:20.590893  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:21.090436  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:21.590656  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:22.090802  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:22.589928  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:23.090636  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:23.590707  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:24.090639  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:24.590650  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:25.089995  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:25.590660  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:26.090132  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:26.590033  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:27.090577  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:27.590867  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:28.090984  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:28.590845  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:29.090300  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:29.590066  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:30.090684  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:30.590040  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:31.090303  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:31.590795  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:32.090206  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:32.590714  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:33.090718  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:33.590378  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:34.090656  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:34.590435  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:35.090317  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:35.590516  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:36.090582  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:36.589956  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:37.090078  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:37.590663  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:38.090428  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:38.590162  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:39.089913  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:39.590888  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:40.090661  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:40.590041  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:41.090883  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:41.590739  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:42.090408  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:32:42.090485  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:32:42.129790  993585 cri.go:89] found id: ""
	I0120 12:32:42.129819  993585 logs.go:282] 0 containers: []
	W0120 12:32:42.129826  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:32:42.129832  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:32:42.129887  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:32:42.160523  993585 cri.go:89] found id: ""
	I0120 12:32:42.160546  993585 logs.go:282] 0 containers: []
	W0120 12:32:42.160555  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:32:42.160560  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:32:42.160606  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:32:42.194768  993585 cri.go:89] found id: ""
	I0120 12:32:42.194796  993585 logs.go:282] 0 containers: []
	W0120 12:32:42.194803  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:32:42.194808  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:32:42.194878  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:32:42.226406  993585 cri.go:89] found id: ""
	I0120 12:32:42.226435  993585 logs.go:282] 0 containers: []
	W0120 12:32:42.226443  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:32:42.226448  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:32:42.226497  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:32:42.263295  993585 cri.go:89] found id: ""
	I0120 12:32:42.263328  993585 logs.go:282] 0 containers: []
	W0120 12:32:42.263352  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:32:42.263362  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:32:42.263419  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:32:42.293754  993585 cri.go:89] found id: ""
	I0120 12:32:42.293784  993585 logs.go:282] 0 containers: []
	W0120 12:32:42.293794  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:32:42.293803  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:32:42.293866  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:32:42.327600  993585 cri.go:89] found id: ""
	I0120 12:32:42.327631  993585 logs.go:282] 0 containers: []
	W0120 12:32:42.327642  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:32:42.327650  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:32:42.327702  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:32:42.356668  993585 cri.go:89] found id: ""
	I0120 12:32:42.356698  993585 logs.go:282] 0 containers: []
	W0120 12:32:42.356710  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:32:42.356722  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:32:42.356734  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:32:42.405030  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:32:42.405063  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:32:42.417663  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:32:42.417690  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:32:42.538067  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:32:42.538100  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:32:42.538122  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:32:42.607706  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:32:42.607743  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:32:45.149684  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:45.161947  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:32:45.162031  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:32:45.204014  993585 cri.go:89] found id: ""
	I0120 12:32:45.204049  993585 logs.go:282] 0 containers: []
	W0120 12:32:45.204060  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:32:45.204068  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:32:45.204129  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:32:45.245164  993585 cri.go:89] found id: ""
	I0120 12:32:45.245196  993585 logs.go:282] 0 containers: []
	W0120 12:32:45.245206  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:32:45.245214  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:32:45.245278  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:32:45.285368  993585 cri.go:89] found id: ""
	I0120 12:32:45.285401  993585 logs.go:282] 0 containers: []
	W0120 12:32:45.285412  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:32:45.285420  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:32:45.285482  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:32:45.322496  993585 cri.go:89] found id: ""
	I0120 12:32:45.322551  993585 logs.go:282] 0 containers: []
	W0120 12:32:45.322564  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:32:45.322573  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:32:45.322632  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:32:45.353693  993585 cri.go:89] found id: ""
	I0120 12:32:45.353723  993585 logs.go:282] 0 containers: []
	W0120 12:32:45.353731  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:32:45.353737  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:32:45.353786  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:32:45.385705  993585 cri.go:89] found id: ""
	I0120 12:32:45.385735  993585 logs.go:282] 0 containers: []
	W0120 12:32:45.385744  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:32:45.385750  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:32:45.385800  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:32:45.419199  993585 cri.go:89] found id: ""
	I0120 12:32:45.419233  993585 logs.go:282] 0 containers: []
	W0120 12:32:45.419243  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:32:45.419251  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:32:45.419317  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:32:45.453757  993585 cri.go:89] found id: ""
	I0120 12:32:45.453789  993585 logs.go:282] 0 containers: []
	W0120 12:32:45.453800  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:32:45.453813  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:32:45.453828  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:32:45.502873  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:32:45.502902  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:32:45.515215  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:32:45.515240  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:32:45.581415  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:32:45.581443  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:32:45.581458  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:32:45.665418  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:32:45.665450  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:32:48.203193  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:48.215966  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:32:48.216028  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:32:48.247173  993585 cri.go:89] found id: ""
	I0120 12:32:48.247201  993585 logs.go:282] 0 containers: []
	W0120 12:32:48.247212  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:32:48.247219  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:32:48.247280  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:32:48.279393  993585 cri.go:89] found id: ""
	I0120 12:32:48.279421  993585 logs.go:282] 0 containers: []
	W0120 12:32:48.279428  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:32:48.279434  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:32:48.279488  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:32:48.310392  993585 cri.go:89] found id: ""
	I0120 12:32:48.310416  993585 logs.go:282] 0 containers: []
	W0120 12:32:48.310423  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:32:48.310429  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:32:48.310473  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:32:48.342762  993585 cri.go:89] found id: ""
	I0120 12:32:48.342794  993585 logs.go:282] 0 containers: []
	W0120 12:32:48.342803  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:32:48.342811  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:32:48.342869  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:32:48.373905  993585 cri.go:89] found id: ""
	I0120 12:32:48.373931  993585 logs.go:282] 0 containers: []
	W0120 12:32:48.373942  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:32:48.373952  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:32:48.374016  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:32:48.406406  993585 cri.go:89] found id: ""
	I0120 12:32:48.406435  993585 logs.go:282] 0 containers: []
	W0120 12:32:48.406443  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:32:48.406449  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:32:48.406494  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:32:48.442695  993585 cri.go:89] found id: ""
	I0120 12:32:48.442728  993585 logs.go:282] 0 containers: []
	W0120 12:32:48.442738  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:32:48.442746  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:32:48.442813  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:32:48.474459  993585 cri.go:89] found id: ""
	I0120 12:32:48.474485  993585 logs.go:282] 0 containers: []
	W0120 12:32:48.474494  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:32:48.474506  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:32:48.474535  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:32:48.522305  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:32:48.522337  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:32:48.535295  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:32:48.535322  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:32:48.605460  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:32:48.605493  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:32:48.605510  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:32:48.689980  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:32:48.690012  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:32:51.228008  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:51.240647  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:32:51.240708  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:32:51.274219  993585 cri.go:89] found id: ""
	I0120 12:32:51.274255  993585 logs.go:282] 0 containers: []
	W0120 12:32:51.274267  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:32:51.274275  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:32:51.274347  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:32:51.307904  993585 cri.go:89] found id: ""
	I0120 12:32:51.307930  993585 logs.go:282] 0 containers: []
	W0120 12:32:51.307939  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:32:51.307948  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:32:51.308000  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:32:51.342253  993585 cri.go:89] found id: ""
	I0120 12:32:51.342280  993585 logs.go:282] 0 containers: []
	W0120 12:32:51.342288  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:32:51.342294  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:32:51.342340  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:32:51.372185  993585 cri.go:89] found id: ""
	I0120 12:32:51.372211  993585 logs.go:282] 0 containers: []
	W0120 12:32:51.372218  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:32:51.372224  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:32:51.372268  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:32:51.402807  993585 cri.go:89] found id: ""
	I0120 12:32:51.402840  993585 logs.go:282] 0 containers: []
	W0120 12:32:51.402851  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:32:51.402858  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:32:51.402932  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:32:51.434101  993585 cri.go:89] found id: ""
	I0120 12:32:51.434129  993585 logs.go:282] 0 containers: []
	W0120 12:32:51.434139  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:32:51.434147  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:32:51.434217  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:32:51.467394  993585 cri.go:89] found id: ""
	I0120 12:32:51.467422  993585 logs.go:282] 0 containers: []
	W0120 12:32:51.467431  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:32:51.467438  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:32:51.467505  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:32:51.498551  993585 cri.go:89] found id: ""
	I0120 12:32:51.498582  993585 logs.go:282] 0 containers: []
	W0120 12:32:51.498592  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:32:51.498604  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:32:51.498619  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:32:51.577501  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:32:51.577533  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:32:51.618784  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:32:51.618825  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:32:51.671630  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:32:51.671667  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:32:51.685726  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:32:51.685750  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:32:51.751392  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:32:54.251524  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:54.265218  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:32:54.265281  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:32:54.299773  993585 cri.go:89] found id: ""
	I0120 12:32:54.299804  993585 logs.go:282] 0 containers: []
	W0120 12:32:54.299813  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:32:54.299820  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:32:54.299867  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:32:54.330432  993585 cri.go:89] found id: ""
	I0120 12:32:54.330461  993585 logs.go:282] 0 containers: []
	W0120 12:32:54.330471  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:32:54.330479  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:32:54.330565  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:32:54.366364  993585 cri.go:89] found id: ""
	I0120 12:32:54.366400  993585 logs.go:282] 0 containers: []
	W0120 12:32:54.366412  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:32:54.366420  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:32:54.366480  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:32:54.398373  993585 cri.go:89] found id: ""
	I0120 12:32:54.398407  993585 logs.go:282] 0 containers: []
	W0120 12:32:54.398417  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:32:54.398425  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:32:54.398486  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:32:54.437033  993585 cri.go:89] found id: ""
	I0120 12:32:54.437064  993585 logs.go:282] 0 containers: []
	W0120 12:32:54.437074  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:32:54.437081  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:32:54.437141  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:32:54.475179  993585 cri.go:89] found id: ""
	I0120 12:32:54.475203  993585 logs.go:282] 0 containers: []
	W0120 12:32:54.475211  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:32:54.475218  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:32:54.475276  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:32:54.507372  993585 cri.go:89] found id: ""
	I0120 12:32:54.507410  993585 logs.go:282] 0 containers: []
	W0120 12:32:54.507420  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:32:54.507428  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:32:54.507484  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:32:54.538317  993585 cri.go:89] found id: ""
	I0120 12:32:54.538351  993585 logs.go:282] 0 containers: []
	W0120 12:32:54.538362  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:32:54.538379  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:32:54.538400  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:32:54.620638  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:32:54.620683  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:32:54.657830  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:32:54.657859  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:32:54.707420  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:32:54.707448  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:32:54.719611  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:32:54.719640  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:32:54.784727  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:32:57.285771  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:57.298606  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:32:57.298677  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:32:57.330216  993585 cri.go:89] found id: ""
	I0120 12:32:57.330245  993585 logs.go:282] 0 containers: []
	W0120 12:32:57.330254  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:32:57.330260  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:32:57.330317  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:32:57.362111  993585 cri.go:89] found id: ""
	I0120 12:32:57.362152  993585 logs.go:282] 0 containers: []
	W0120 12:32:57.362162  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:32:57.362169  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:32:57.362220  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:32:57.395597  993585 cri.go:89] found id: ""
	I0120 12:32:57.395624  993585 logs.go:282] 0 containers: []
	W0120 12:32:57.395634  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:32:57.395640  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:32:57.395700  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:32:57.425897  993585 cri.go:89] found id: ""
	I0120 12:32:57.425925  993585 logs.go:282] 0 containers: []
	W0120 12:32:57.425933  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:32:57.425939  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:32:57.425986  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:32:57.458500  993585 cri.go:89] found id: ""
	I0120 12:32:57.458544  993585 logs.go:282] 0 containers: []
	W0120 12:32:57.458554  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:32:57.458563  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:32:57.458625  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:32:57.489583  993585 cri.go:89] found id: ""
	I0120 12:32:57.489616  993585 logs.go:282] 0 containers: []
	W0120 12:32:57.489626  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:32:57.489634  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:32:57.489685  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:32:57.520588  993585 cri.go:89] found id: ""
	I0120 12:32:57.520617  993585 logs.go:282] 0 containers: []
	W0120 12:32:57.520624  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:32:57.520630  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:32:57.520676  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:32:57.555799  993585 cri.go:89] found id: ""
	I0120 12:32:57.555824  993585 logs.go:282] 0 containers: []
	W0120 12:32:57.555833  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:32:57.555843  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:32:57.555855  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:32:57.605038  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:32:57.605071  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:32:57.619575  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:32:57.619603  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:32:57.686685  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:32:57.686703  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:32:57.686731  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:32:57.762968  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:32:57.763003  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:00.306647  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:00.321029  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:00.321083  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:00.355924  993585 cri.go:89] found id: ""
	I0120 12:33:00.355954  993585 logs.go:282] 0 containers: []
	W0120 12:33:00.355963  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:00.355969  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:00.356021  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:00.390766  993585 cri.go:89] found id: ""
	I0120 12:33:00.390793  993585 logs.go:282] 0 containers: []
	W0120 12:33:00.390801  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:00.390807  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:00.390855  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:00.424790  993585 cri.go:89] found id: ""
	I0120 12:33:00.424820  993585 logs.go:282] 0 containers: []
	W0120 12:33:00.424828  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:00.424833  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:00.424880  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:00.454941  993585 cri.go:89] found id: ""
	I0120 12:33:00.454975  993585 logs.go:282] 0 containers: []
	W0120 12:33:00.454987  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:00.454995  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:00.455056  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:00.488642  993585 cri.go:89] found id: ""
	I0120 12:33:00.488670  993585 logs.go:282] 0 containers: []
	W0120 12:33:00.488679  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:00.488684  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:00.488731  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:00.518470  993585 cri.go:89] found id: ""
	I0120 12:33:00.518501  993585 logs.go:282] 0 containers: []
	W0120 12:33:00.518511  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:00.518535  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:00.518595  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:00.554139  993585 cri.go:89] found id: ""
	I0120 12:33:00.554167  993585 logs.go:282] 0 containers: []
	W0120 12:33:00.554174  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:00.554180  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:00.554236  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:00.587766  993585 cri.go:89] found id: ""
	I0120 12:33:00.587792  993585 logs.go:282] 0 containers: []
	W0120 12:33:00.587799  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:00.587809  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:00.587821  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:00.639504  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:00.639541  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:00.651660  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:00.651687  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:00.725669  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:00.725697  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:00.725716  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:00.806460  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:00.806496  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:03.341420  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:03.354948  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:03.355022  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:03.389867  993585 cri.go:89] found id: ""
	I0120 12:33:03.389965  993585 logs.go:282] 0 containers: []
	W0120 12:33:03.389977  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:03.389986  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:03.390042  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:03.421478  993585 cri.go:89] found id: ""
	I0120 12:33:03.421505  993585 logs.go:282] 0 containers: []
	W0120 12:33:03.421517  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:03.421525  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:03.421593  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:03.453805  993585 cri.go:89] found id: ""
	I0120 12:33:03.453838  993585 logs.go:282] 0 containers: []
	W0120 12:33:03.453850  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:03.453858  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:03.453917  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:03.487503  993585 cri.go:89] found id: ""
	I0120 12:33:03.487536  993585 logs.go:282] 0 containers: []
	W0120 12:33:03.487547  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:03.487555  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:03.487621  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:03.517560  993585 cri.go:89] found id: ""
	I0120 12:33:03.517585  993585 logs.go:282] 0 containers: []
	W0120 12:33:03.517594  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:03.517602  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:03.517661  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:03.547328  993585 cri.go:89] found id: ""
	I0120 12:33:03.547368  993585 logs.go:282] 0 containers: []
	W0120 12:33:03.547380  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:03.547389  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:03.547447  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:03.580215  993585 cri.go:89] found id: ""
	I0120 12:33:03.580242  993585 logs.go:282] 0 containers: []
	W0120 12:33:03.580251  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:03.580256  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:03.580319  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:03.613176  993585 cri.go:89] found id: ""
	I0120 12:33:03.613208  993585 logs.go:282] 0 containers: []
	W0120 12:33:03.613220  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:03.613233  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:03.613247  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:03.667093  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:03.667129  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:03.680234  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:03.680260  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:03.744763  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:03.744788  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:03.744805  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:03.824813  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:03.824856  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:06.364296  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:06.377247  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:06.377314  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:06.408701  993585 cri.go:89] found id: ""
	I0120 12:33:06.408725  993585 logs.go:282] 0 containers: []
	W0120 12:33:06.408733  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:06.408738  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:06.408800  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:06.440716  993585 cri.go:89] found id: ""
	I0120 12:33:06.440744  993585 logs.go:282] 0 containers: []
	W0120 12:33:06.440752  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:06.440758  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:06.440811  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:06.471832  993585 cri.go:89] found id: ""
	I0120 12:33:06.471866  993585 logs.go:282] 0 containers: []
	W0120 12:33:06.471877  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:06.471884  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:06.471947  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:06.504122  993585 cri.go:89] found id: ""
	I0120 12:33:06.504149  993585 logs.go:282] 0 containers: []
	W0120 12:33:06.504157  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:06.504163  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:06.504214  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:06.535353  993585 cri.go:89] found id: ""
	I0120 12:33:06.535386  993585 logs.go:282] 0 containers: []
	W0120 12:33:06.535397  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:06.535405  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:06.535460  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:06.571284  993585 cri.go:89] found id: ""
	I0120 12:33:06.571309  993585 logs.go:282] 0 containers: []
	W0120 12:33:06.571316  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:06.571322  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:06.571379  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:06.604008  993585 cri.go:89] found id: ""
	I0120 12:33:06.604042  993585 logs.go:282] 0 containers: []
	W0120 12:33:06.604055  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:06.604062  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:06.604142  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:06.636221  993585 cri.go:89] found id: ""
	I0120 12:33:06.636258  993585 logs.go:282] 0 containers: []
	W0120 12:33:06.636270  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:06.636284  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:06.636299  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:06.671820  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:06.671845  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:06.723338  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:06.723369  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:06.736258  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:06.736285  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:06.807310  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:06.807336  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:06.807352  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:09.386909  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:09.399300  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:09.399363  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:09.431976  993585 cri.go:89] found id: ""
	I0120 12:33:09.432013  993585 logs.go:282] 0 containers: []
	W0120 12:33:09.432025  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:09.432032  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:09.432085  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:09.468016  993585 cri.go:89] found id: ""
	I0120 12:33:09.468042  993585 logs.go:282] 0 containers: []
	W0120 12:33:09.468053  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:09.468061  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:09.468124  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:09.501613  993585 cri.go:89] found id: ""
	I0120 12:33:09.501648  993585 logs.go:282] 0 containers: []
	W0120 12:33:09.501657  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:09.501667  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:09.501734  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:09.535261  993585 cri.go:89] found id: ""
	I0120 12:33:09.535296  993585 logs.go:282] 0 containers: []
	W0120 12:33:09.535308  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:09.535315  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:09.535382  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:09.569838  993585 cri.go:89] found id: ""
	I0120 12:33:09.569873  993585 logs.go:282] 0 containers: []
	W0120 12:33:09.569885  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:09.569893  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:09.569961  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:09.601673  993585 cri.go:89] found id: ""
	I0120 12:33:09.601701  993585 logs.go:282] 0 containers: []
	W0120 12:33:09.601709  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:09.601714  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:09.601773  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:09.638035  993585 cri.go:89] found id: ""
	I0120 12:33:09.638068  993585 logs.go:282] 0 containers: []
	W0120 12:33:09.638080  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:09.638089  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:09.638155  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:09.671128  993585 cri.go:89] found id: ""
	I0120 12:33:09.671149  993585 logs.go:282] 0 containers: []
	W0120 12:33:09.671156  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:09.671165  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:09.671178  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:09.723616  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:09.723648  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:09.737987  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:09.738020  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:09.810583  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:09.810613  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:09.810627  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:09.887641  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:09.887676  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:12.423728  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:12.437277  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:12.437368  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:12.470427  993585 cri.go:89] found id: ""
	I0120 12:33:12.470455  993585 logs.go:282] 0 containers: []
	W0120 12:33:12.470463  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:12.470468  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:12.470546  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:12.501063  993585 cri.go:89] found id: ""
	I0120 12:33:12.501103  993585 logs.go:282] 0 containers: []
	W0120 12:33:12.501130  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:12.501138  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:12.501287  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:12.535254  993585 cri.go:89] found id: ""
	I0120 12:33:12.535284  993585 logs.go:282] 0 containers: []
	W0120 12:33:12.535295  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:12.535303  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:12.535354  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:12.568250  993585 cri.go:89] found id: ""
	I0120 12:33:12.568289  993585 logs.go:282] 0 containers: []
	W0120 12:33:12.568301  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:12.568307  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:12.568372  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:12.599927  993585 cri.go:89] found id: ""
	I0120 12:33:12.599961  993585 logs.go:282] 0 containers: []
	W0120 12:33:12.599970  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:12.599976  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:12.600031  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:12.632502  993585 cri.go:89] found id: ""
	I0120 12:33:12.632537  993585 logs.go:282] 0 containers: []
	W0120 12:33:12.632549  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:12.632559  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:12.632620  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:12.664166  993585 cri.go:89] found id: ""
	I0120 12:33:12.664200  993585 logs.go:282] 0 containers: []
	W0120 12:33:12.664208  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:12.664216  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:12.664270  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:12.697996  993585 cri.go:89] found id: ""
	I0120 12:33:12.698028  993585 logs.go:282] 0 containers: []
	W0120 12:33:12.698039  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:12.698054  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:12.698070  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:12.751712  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:12.751745  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:12.765184  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:12.765213  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:12.830999  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:12.831027  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:12.831046  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:12.911211  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:12.911244  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:15.449634  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:15.464863  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:15.464931  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:15.495576  993585 cri.go:89] found id: ""
	I0120 12:33:15.495609  993585 logs.go:282] 0 containers: []
	W0120 12:33:15.495620  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:15.495629  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:15.495689  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:15.525730  993585 cri.go:89] found id: ""
	I0120 12:33:15.525757  993585 logs.go:282] 0 containers: []
	W0120 12:33:15.525767  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:15.525775  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:15.525832  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:15.556077  993585 cri.go:89] found id: ""
	I0120 12:33:15.556117  993585 logs.go:282] 0 containers: []
	W0120 12:33:15.556127  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:15.556135  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:15.556195  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:15.585820  993585 cri.go:89] found id: ""
	I0120 12:33:15.585852  993585 logs.go:282] 0 containers: []
	W0120 12:33:15.585860  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:15.585867  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:15.585924  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:15.615985  993585 cri.go:89] found id: ""
	I0120 12:33:15.616027  993585 logs.go:282] 0 containers: []
	W0120 12:33:15.616035  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:15.616041  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:15.616093  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:15.648570  993585 cri.go:89] found id: ""
	I0120 12:33:15.648604  993585 logs.go:282] 0 containers: []
	W0120 12:33:15.648611  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:15.648617  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:15.648664  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:15.678674  993585 cri.go:89] found id: ""
	I0120 12:33:15.678704  993585 logs.go:282] 0 containers: []
	W0120 12:33:15.678714  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:15.678721  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:15.678786  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:15.708444  993585 cri.go:89] found id: ""
	I0120 12:33:15.708468  993585 logs.go:282] 0 containers: []
	W0120 12:33:15.708476  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:15.708485  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:15.708500  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:15.758053  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:15.758083  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:15.770661  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:15.770688  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:15.833234  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:15.833257  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:15.833271  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:15.906939  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:15.906969  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:18.442922  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:18.455489  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:18.455557  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:18.495102  993585 cri.go:89] found id: ""
	I0120 12:33:18.495135  993585 logs.go:282] 0 containers: []
	W0120 12:33:18.495145  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:18.495154  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:18.495225  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:18.530047  993585 cri.go:89] found id: ""
	I0120 12:33:18.530078  993585 logs.go:282] 0 containers: []
	W0120 12:33:18.530094  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:18.530102  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:18.530165  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:18.566556  993585 cri.go:89] found id: ""
	I0120 12:33:18.566585  993585 logs.go:282] 0 containers: []
	W0120 12:33:18.566595  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:18.566602  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:18.566661  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:18.604783  993585 cri.go:89] found id: ""
	I0120 12:33:18.604819  993585 logs.go:282] 0 containers: []
	W0120 12:33:18.604834  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:18.604842  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:18.604913  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:18.638998  993585 cri.go:89] found id: ""
	I0120 12:33:18.639025  993585 logs.go:282] 0 containers: []
	W0120 12:33:18.639036  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:18.639043  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:18.639107  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:18.669083  993585 cri.go:89] found id: ""
	I0120 12:33:18.669121  993585 logs.go:282] 0 containers: []
	W0120 12:33:18.669130  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:18.669136  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:18.669192  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:18.701062  993585 cri.go:89] found id: ""
	I0120 12:33:18.701089  993585 logs.go:282] 0 containers: []
	W0120 12:33:18.701097  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:18.701115  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:18.701180  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:18.732086  993585 cri.go:89] found id: ""
	I0120 12:33:18.732131  993585 logs.go:282] 0 containers: []
	W0120 12:33:18.732142  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:18.732157  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:18.732174  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:18.779325  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:18.779357  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:18.792530  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:18.792565  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:18.863429  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:18.863452  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:18.863464  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:18.941343  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:18.941375  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:21.481380  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:21.493618  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:21.493699  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:21.524040  993585 cri.go:89] found id: ""
	I0120 12:33:21.524067  993585 logs.go:282] 0 containers: []
	W0120 12:33:21.524075  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:21.524081  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:21.524149  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:21.554666  993585 cri.go:89] found id: ""
	I0120 12:33:21.554698  993585 logs.go:282] 0 containers: []
	W0120 12:33:21.554708  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:21.554715  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:21.554762  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:21.585584  993585 cri.go:89] found id: ""
	I0120 12:33:21.585610  993585 logs.go:282] 0 containers: []
	W0120 12:33:21.585617  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:21.585623  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:21.585670  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:21.615611  993585 cri.go:89] found id: ""
	I0120 12:33:21.615646  993585 logs.go:282] 0 containers: []
	W0120 12:33:21.615657  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:21.615666  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:21.615715  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:21.646761  993585 cri.go:89] found id: ""
	I0120 12:33:21.646788  993585 logs.go:282] 0 containers: []
	W0120 12:33:21.646796  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:21.646801  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:21.646853  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:21.681380  993585 cri.go:89] found id: ""
	I0120 12:33:21.681410  993585 logs.go:282] 0 containers: []
	W0120 12:33:21.681420  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:21.681428  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:21.681488  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:21.712708  993585 cri.go:89] found id: ""
	I0120 12:33:21.712743  993585 logs.go:282] 0 containers: []
	W0120 12:33:21.712759  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:21.712766  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:21.712828  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:21.746105  993585 cri.go:89] found id: ""
	I0120 12:33:21.746132  993585 logs.go:282] 0 containers: []
	W0120 12:33:21.746140  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:21.746150  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:21.746162  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:21.795702  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:21.795744  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:21.807548  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:21.807570  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:21.869605  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:21.869627  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:21.869646  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:21.941092  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:21.941120  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:24.487520  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:24.501031  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:24.501119  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:24.533191  993585 cri.go:89] found id: ""
	I0120 12:33:24.533220  993585 logs.go:282] 0 containers: []
	W0120 12:33:24.533230  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:24.533237  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:24.533300  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:24.565809  993585 cri.go:89] found id: ""
	I0120 12:33:24.565837  993585 logs.go:282] 0 containers: []
	W0120 12:33:24.565845  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:24.565850  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:24.565902  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:24.600607  993585 cri.go:89] found id: ""
	I0120 12:33:24.600643  993585 logs.go:282] 0 containers: []
	W0120 12:33:24.600655  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:24.600663  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:24.600742  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:24.637320  993585 cri.go:89] found id: ""
	I0120 12:33:24.637354  993585 logs.go:282] 0 containers: []
	W0120 12:33:24.637365  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:24.637373  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:24.637433  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:24.674906  993585 cri.go:89] found id: ""
	I0120 12:33:24.674940  993585 logs.go:282] 0 containers: []
	W0120 12:33:24.674952  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:24.674960  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:24.675024  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:24.707058  993585 cri.go:89] found id: ""
	I0120 12:33:24.707084  993585 logs.go:282] 0 containers: []
	W0120 12:33:24.707091  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:24.707097  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:24.707159  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:24.740554  993585 cri.go:89] found id: ""
	I0120 12:33:24.740590  993585 logs.go:282] 0 containers: []
	W0120 12:33:24.740603  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:24.740614  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:24.740680  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:24.773021  993585 cri.go:89] found id: ""
	I0120 12:33:24.773052  993585 logs.go:282] 0 containers: []
	W0120 12:33:24.773064  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:24.773077  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:24.773094  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:24.863129  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:24.863156  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:24.863169  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:24.939479  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:24.939516  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:24.975325  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:24.975358  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:25.026952  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:25.026993  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:27.539957  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:27.553387  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:27.553449  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:27.587773  993585 cri.go:89] found id: ""
	I0120 12:33:27.587804  993585 logs.go:282] 0 containers: []
	W0120 12:33:27.587812  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:27.587818  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:27.587868  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:27.617735  993585 cri.go:89] found id: ""
	I0120 12:33:27.617767  993585 logs.go:282] 0 containers: []
	W0120 12:33:27.617777  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:27.617785  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:27.617865  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:27.652958  993585 cri.go:89] found id: ""
	I0120 12:33:27.652978  993585 logs.go:282] 0 containers: []
	W0120 12:33:27.652985  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:27.652990  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:27.653047  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:27.686924  993585 cri.go:89] found id: ""
	I0120 12:33:27.686947  993585 logs.go:282] 0 containers: []
	W0120 12:33:27.686954  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:27.686960  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:27.687012  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:27.720217  993585 cri.go:89] found id: ""
	I0120 12:33:27.720246  993585 logs.go:282] 0 containers: []
	W0120 12:33:27.720258  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:27.720265  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:27.720334  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:27.757382  993585 cri.go:89] found id: ""
	I0120 12:33:27.757418  993585 logs.go:282] 0 containers: []
	W0120 12:33:27.757430  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:27.757438  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:27.757504  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:27.788498  993585 cri.go:89] found id: ""
	I0120 12:33:27.788528  993585 logs.go:282] 0 containers: []
	W0120 12:33:27.788538  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:27.788546  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:27.788616  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:27.820146  993585 cri.go:89] found id: ""
	I0120 12:33:27.820178  993585 logs.go:282] 0 containers: []
	W0120 12:33:27.820186  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:27.820196  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:27.820207  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:27.832201  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:27.832225  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:27.905179  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:27.905202  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:27.905227  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:27.984792  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:27.984829  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:28.027290  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:28.027397  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:30.578691  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:30.591302  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:30.591365  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:30.627747  993585 cri.go:89] found id: ""
	I0120 12:33:30.627775  993585 logs.go:282] 0 containers: []
	W0120 12:33:30.627802  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:30.627810  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:30.627881  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:30.674653  993585 cri.go:89] found id: ""
	I0120 12:33:30.674684  993585 logs.go:282] 0 containers: []
	W0120 12:33:30.674694  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:30.674702  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:30.674766  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:30.716811  993585 cri.go:89] found id: ""
	I0120 12:33:30.716839  993585 logs.go:282] 0 containers: []
	W0120 12:33:30.716850  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:30.716857  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:30.716922  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:30.749623  993585 cri.go:89] found id: ""
	I0120 12:33:30.749655  993585 logs.go:282] 0 containers: []
	W0120 12:33:30.749666  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:30.749674  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:30.749742  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:30.780140  993585 cri.go:89] found id: ""
	I0120 12:33:30.780172  993585 logs.go:282] 0 containers: []
	W0120 12:33:30.780180  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:30.780186  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:30.780241  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:30.808356  993585 cri.go:89] found id: ""
	I0120 12:33:30.808387  993585 logs.go:282] 0 containers: []
	W0120 12:33:30.808395  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:30.808407  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:30.808476  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:30.842019  993585 cri.go:89] found id: ""
	I0120 12:33:30.842047  993585 logs.go:282] 0 containers: []
	W0120 12:33:30.842054  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:30.842060  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:30.842109  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:30.871526  993585 cri.go:89] found id: ""
	I0120 12:33:30.871551  993585 logs.go:282] 0 containers: []
	W0120 12:33:30.871559  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:30.871568  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:30.871581  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:30.919022  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:30.919051  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:30.931897  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:30.931933  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:30.993261  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:30.993282  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:30.993296  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:31.069346  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:31.069384  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:33.606755  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:33.619163  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:33.619232  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:33.654390  993585 cri.go:89] found id: ""
	I0120 12:33:33.654423  993585 logs.go:282] 0 containers: []
	W0120 12:33:33.654432  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:33.654438  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:33.654487  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:33.689183  993585 cri.go:89] found id: ""
	I0120 12:33:33.689218  993585 logs.go:282] 0 containers: []
	W0120 12:33:33.689230  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:33.689239  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:33.689302  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:33.720803  993585 cri.go:89] found id: ""
	I0120 12:33:33.720832  993585 logs.go:282] 0 containers: []
	W0120 12:33:33.720839  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:33.720845  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:33.720893  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:33.755948  993585 cri.go:89] found id: ""
	I0120 12:33:33.755985  993585 logs.go:282] 0 containers: []
	W0120 12:33:33.755995  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:33.756003  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:33.756071  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:33.788407  993585 cri.go:89] found id: ""
	I0120 12:33:33.788444  993585 logs.go:282] 0 containers: []
	W0120 12:33:33.788457  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:33.788466  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:33.788524  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:33.819077  993585 cri.go:89] found id: ""
	I0120 12:33:33.819102  993585 logs.go:282] 0 containers: []
	W0120 12:33:33.819109  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:33.819115  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:33.819164  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:33.848263  993585 cri.go:89] found id: ""
	I0120 12:33:33.848288  993585 logs.go:282] 0 containers: []
	W0120 12:33:33.848296  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:33.848301  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:33.848347  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:33.877393  993585 cri.go:89] found id: ""
	I0120 12:33:33.877428  993585 logs.go:282] 0 containers: []
	W0120 12:33:33.877439  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:33.877451  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:33.877462  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:33.928766  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:33.928796  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:33.941450  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:33.941474  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:34.004416  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:34.004446  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:34.004461  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:34.079056  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:34.079088  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:36.622644  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:36.634862  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:36.634939  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:36.670074  993585 cri.go:89] found id: ""
	I0120 12:33:36.670113  993585 logs.go:282] 0 containers: []
	W0120 12:33:36.670124  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:36.670132  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:36.670189  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:36.706117  993585 cri.go:89] found id: ""
	I0120 12:33:36.706152  993585 logs.go:282] 0 containers: []
	W0120 12:33:36.706159  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:36.706164  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:36.706219  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:36.741133  993585 cri.go:89] found id: ""
	I0120 12:33:36.741167  993585 logs.go:282] 0 containers: []
	W0120 12:33:36.741177  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:36.741185  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:36.741242  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:36.773791  993585 cri.go:89] found id: ""
	I0120 12:33:36.773819  993585 logs.go:282] 0 containers: []
	W0120 12:33:36.773830  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:36.773837  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:36.773901  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:36.807401  993585 cri.go:89] found id: ""
	I0120 12:33:36.807432  993585 logs.go:282] 0 containers: []
	W0120 12:33:36.807440  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:36.807447  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:36.807500  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:36.839815  993585 cri.go:89] found id: ""
	I0120 12:33:36.839850  993585 logs.go:282] 0 containers: []
	W0120 12:33:36.839861  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:36.839870  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:36.839934  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:36.868579  993585 cri.go:89] found id: ""
	I0120 12:33:36.868610  993585 logs.go:282] 0 containers: []
	W0120 12:33:36.868620  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:36.868626  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:36.868685  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:36.898430  993585 cri.go:89] found id: ""
	I0120 12:33:36.898455  993585 logs.go:282] 0 containers: []
	W0120 12:33:36.898462  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:36.898475  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:36.898490  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:36.947718  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:36.947758  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:36.962705  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:36.962740  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:37.053761  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:37.053792  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:37.053805  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:37.148364  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:37.148400  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:39.690060  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:39.702447  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:39.702516  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:39.733846  993585 cri.go:89] found id: ""
	I0120 12:33:39.733868  993585 logs.go:282] 0 containers: []
	W0120 12:33:39.733876  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:39.733883  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:39.733939  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:39.762657  993585 cri.go:89] found id: ""
	I0120 12:33:39.762682  993585 logs.go:282] 0 containers: []
	W0120 12:33:39.762690  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:39.762695  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:39.762743  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:39.794803  993585 cri.go:89] found id: ""
	I0120 12:33:39.794832  993585 logs.go:282] 0 containers: []
	W0120 12:33:39.794841  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:39.794847  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:39.794891  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:39.823584  993585 cri.go:89] found id: ""
	I0120 12:33:39.823614  993585 logs.go:282] 0 containers: []
	W0120 12:33:39.823625  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:39.823633  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:39.823689  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:39.851954  993585 cri.go:89] found id: ""
	I0120 12:33:39.851978  993585 logs.go:282] 0 containers: []
	W0120 12:33:39.851985  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:39.851991  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:39.852091  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:39.881315  993585 cri.go:89] found id: ""
	I0120 12:33:39.881347  993585 logs.go:282] 0 containers: []
	W0120 12:33:39.881358  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:39.881367  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:39.881428  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:39.911797  993585 cri.go:89] found id: ""
	I0120 12:33:39.911827  993585 logs.go:282] 0 containers: []
	W0120 12:33:39.911836  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:39.911841  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:39.911887  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:39.941625  993585 cri.go:89] found id: ""
	I0120 12:33:39.941653  993585 logs.go:282] 0 containers: []
	W0120 12:33:39.941661  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:39.941671  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:39.941683  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:39.991689  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:39.991718  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:40.004850  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:40.004871  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:40.069863  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:40.069883  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:40.069894  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:40.149093  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:40.149129  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:42.692596  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:42.710550  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:42.710636  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:42.761626  993585 cri.go:89] found id: ""
	I0120 12:33:42.761665  993585 logs.go:282] 0 containers: []
	W0120 12:33:42.761677  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:42.761685  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:42.761753  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:42.825148  993585 cri.go:89] found id: ""
	I0120 12:33:42.825181  993585 logs.go:282] 0 containers: []
	W0120 12:33:42.825191  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:42.825196  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:42.825258  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:42.859035  993585 cri.go:89] found id: ""
	I0120 12:33:42.859066  993585 logs.go:282] 0 containers: []
	W0120 12:33:42.859075  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:42.859081  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:42.859134  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:42.890335  993585 cri.go:89] found id: ""
	I0120 12:33:42.890364  993585 logs.go:282] 0 containers: []
	W0120 12:33:42.890372  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:42.890378  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:42.890442  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:42.929857  993585 cri.go:89] found id: ""
	I0120 12:33:42.929882  993585 logs.go:282] 0 containers: []
	W0120 12:33:42.929890  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:42.929896  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:42.929944  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:42.960830  993585 cri.go:89] found id: ""
	I0120 12:33:42.960864  993585 logs.go:282] 0 containers: []
	W0120 12:33:42.960874  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:42.960882  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:42.960948  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:42.995324  993585 cri.go:89] found id: ""
	I0120 12:33:42.995354  993585 logs.go:282] 0 containers: []
	W0120 12:33:42.995368  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:42.995374  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:42.995424  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:43.028259  993585 cri.go:89] found id: ""
	I0120 12:33:43.028286  993585 logs.go:282] 0 containers: []
	W0120 12:33:43.028294  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:43.028306  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:43.028316  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:43.079487  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:43.079517  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:43.091452  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:43.091475  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:43.153152  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:43.153178  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:43.153192  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:43.236284  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:43.236325  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:45.774706  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:45.791967  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:45.792052  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:45.824678  993585 cri.go:89] found id: ""
	I0120 12:33:45.824710  993585 logs.go:282] 0 containers: []
	W0120 12:33:45.824720  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:45.824729  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:45.824793  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:45.857843  993585 cri.go:89] found id: ""
	I0120 12:33:45.857876  993585 logs.go:282] 0 containers: []
	W0120 12:33:45.857885  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:45.857891  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:45.857944  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:45.898182  993585 cri.go:89] found id: ""
	I0120 12:33:45.898215  993585 logs.go:282] 0 containers: []
	W0120 12:33:45.898227  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:45.898235  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:45.898302  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:45.929223  993585 cri.go:89] found id: ""
	I0120 12:33:45.929259  993585 logs.go:282] 0 containers: []
	W0120 12:33:45.929272  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:45.929282  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:45.929380  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:45.960800  993585 cri.go:89] found id: ""
	I0120 12:33:45.960849  993585 logs.go:282] 0 containers: []
	W0120 12:33:45.960870  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:45.960879  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:45.960957  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:45.997846  993585 cri.go:89] found id: ""
	I0120 12:33:45.997878  993585 logs.go:282] 0 containers: []
	W0120 12:33:45.997889  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:45.997897  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:45.997969  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:46.033227  993585 cri.go:89] found id: ""
	I0120 12:33:46.033267  993585 logs.go:282] 0 containers: []
	W0120 12:33:46.033278  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:46.033286  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:46.033354  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:46.066691  993585 cri.go:89] found id: ""
	I0120 12:33:46.066723  993585 logs.go:282] 0 containers: []
	W0120 12:33:46.066733  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:46.066746  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:46.066763  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:46.133257  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:46.133280  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:46.133293  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:46.232667  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:46.232720  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:46.274332  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:46.274371  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:46.327098  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:46.327142  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:48.841385  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:48.854037  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:48.854105  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:48.889959  993585 cri.go:89] found id: ""
	I0120 12:33:48.889996  993585 logs.go:282] 0 containers: []
	W0120 12:33:48.890008  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:48.890017  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:48.890084  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:48.926271  993585 cri.go:89] found id: ""
	I0120 12:33:48.926313  993585 logs.go:282] 0 containers: []
	W0120 12:33:48.926326  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:48.926334  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:48.926409  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:48.962768  993585 cri.go:89] found id: ""
	I0120 12:33:48.962803  993585 logs.go:282] 0 containers: []
	W0120 12:33:48.962816  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:48.962825  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:48.962895  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:48.998039  993585 cri.go:89] found id: ""
	I0120 12:33:48.998075  993585 logs.go:282] 0 containers: []
	W0120 12:33:48.998086  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:48.998093  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:48.998161  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:49.038710  993585 cri.go:89] found id: ""
	I0120 12:33:49.038745  993585 logs.go:282] 0 containers: []
	W0120 12:33:49.038756  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:49.038765  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:49.038835  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:49.074829  993585 cri.go:89] found id: ""
	I0120 12:33:49.074863  993585 logs.go:282] 0 containers: []
	W0120 12:33:49.074874  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:49.074883  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:49.074950  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:49.115354  993585 cri.go:89] found id: ""
	I0120 12:33:49.115383  993585 logs.go:282] 0 containers: []
	W0120 12:33:49.115392  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:49.115397  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:49.115446  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:49.152837  993585 cri.go:89] found id: ""
	I0120 12:33:49.152870  993585 logs.go:282] 0 containers: []
	W0120 12:33:49.152880  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:49.152892  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:49.152906  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:49.194817  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:49.194842  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:49.247223  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:49.247255  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:49.259939  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:49.259965  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:49.326047  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:49.326081  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:49.326108  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:51.904391  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:51.916726  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:51.916806  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:51.950574  993585 cri.go:89] found id: ""
	I0120 12:33:51.950602  993585 logs.go:282] 0 containers: []
	W0120 12:33:51.950610  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:51.950619  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:51.950683  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:51.982905  993585 cri.go:89] found id: ""
	I0120 12:33:51.982931  993585 logs.go:282] 0 containers: []
	W0120 12:33:51.982939  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:51.982950  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:51.982998  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:52.017989  993585 cri.go:89] found id: ""
	I0120 12:33:52.018029  993585 logs.go:282] 0 containers: []
	W0120 12:33:52.018041  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:52.018049  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:52.018117  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:52.050405  993585 cri.go:89] found id: ""
	I0120 12:33:52.050432  993585 logs.go:282] 0 containers: []
	W0120 12:33:52.050442  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:52.050450  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:52.050540  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:52.080729  993585 cri.go:89] found id: ""
	I0120 12:33:52.080760  993585 logs.go:282] 0 containers: []
	W0120 12:33:52.080767  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:52.080773  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:52.080826  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:52.110809  993585 cri.go:89] found id: ""
	I0120 12:33:52.110839  993585 logs.go:282] 0 containers: []
	W0120 12:33:52.110849  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:52.110856  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:52.110915  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:52.143357  993585 cri.go:89] found id: ""
	I0120 12:33:52.143387  993585 logs.go:282] 0 containers: []
	W0120 12:33:52.143397  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:52.143405  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:52.143475  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:52.179555  993585 cri.go:89] found id: ""
	I0120 12:33:52.179584  993585 logs.go:282] 0 containers: []
	W0120 12:33:52.179594  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:52.179607  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:52.179622  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:52.268223  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:52.268257  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:52.304968  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:52.305008  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:52.354773  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:52.354811  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:52.366909  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:52.366933  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:52.434038  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:54.934844  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:54.954370  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:54.954453  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:54.987088  993585 cri.go:89] found id: ""
	I0120 12:33:54.987124  993585 logs.go:282] 0 containers: []
	W0120 12:33:54.987136  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:54.987144  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:54.987207  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:55.020248  993585 cri.go:89] found id: ""
	I0120 12:33:55.020282  993585 logs.go:282] 0 containers: []
	W0120 12:33:55.020293  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:55.020301  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:55.020374  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:55.059488  993585 cri.go:89] found id: ""
	I0120 12:33:55.059529  993585 logs.go:282] 0 containers: []
	W0120 12:33:55.059541  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:55.059550  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:55.059614  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:55.095049  993585 cri.go:89] found id: ""
	I0120 12:33:55.095088  993585 logs.go:282] 0 containers: []
	W0120 12:33:55.095102  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:55.095112  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:55.095189  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:55.131993  993585 cri.go:89] found id: ""
	I0120 12:33:55.132028  993585 logs.go:282] 0 containers: []
	W0120 12:33:55.132039  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:55.132045  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:55.132107  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:55.168716  993585 cri.go:89] found id: ""
	I0120 12:33:55.168744  993585 logs.go:282] 0 containers: []
	W0120 12:33:55.168755  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:55.168764  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:55.168828  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:55.211532  993585 cri.go:89] found id: ""
	I0120 12:33:55.211566  993585 logs.go:282] 0 containers: []
	W0120 12:33:55.211578  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:55.211591  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:55.211658  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:55.245961  993585 cri.go:89] found id: ""
	I0120 12:33:55.245993  993585 logs.go:282] 0 containers: []
	W0120 12:33:55.246004  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:55.246019  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:55.246036  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:55.297819  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:55.297865  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:55.314469  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:55.314514  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:55.386489  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:55.386544  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:55.386566  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:55.466897  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:55.466954  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:58.014588  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:58.032828  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:58.032905  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:58.075631  993585 cri.go:89] found id: ""
	I0120 12:33:58.075671  993585 logs.go:282] 0 containers: []
	W0120 12:33:58.075774  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:58.075801  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:58.075887  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:58.117897  993585 cri.go:89] found id: ""
	I0120 12:33:58.117934  993585 logs.go:282] 0 containers: []
	W0120 12:33:58.117945  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:58.117953  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:58.118022  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:58.161106  993585 cri.go:89] found id: ""
	I0120 12:33:58.161138  993585 logs.go:282] 0 containers: []
	W0120 12:33:58.161149  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:58.161157  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:58.161222  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:58.203869  993585 cri.go:89] found id: ""
	I0120 12:33:58.203905  993585 logs.go:282] 0 containers: []
	W0120 12:33:58.203915  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:58.203923  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:58.203991  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:58.247905  993585 cri.go:89] found id: ""
	I0120 12:33:58.247938  993585 logs.go:282] 0 containers: []
	W0120 12:33:58.247949  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:58.247956  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:58.248016  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:58.281395  993585 cri.go:89] found id: ""
	I0120 12:33:58.281426  993585 logs.go:282] 0 containers: []
	W0120 12:33:58.281437  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:58.281445  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:58.281506  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:58.318950  993585 cri.go:89] found id: ""
	I0120 12:33:58.318982  993585 logs.go:282] 0 containers: []
	W0120 12:33:58.318991  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:58.318996  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:58.319055  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:58.351052  993585 cri.go:89] found id: ""
	I0120 12:33:58.351080  993585 logs.go:282] 0 containers: []
	W0120 12:33:58.351089  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:58.351107  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:58.351134  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:58.363459  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:58.363489  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:58.427460  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:58.427502  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:58.427520  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:58.502031  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:58.502065  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:58.539404  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:58.539434  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:01.093414  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:01.106353  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:01.106422  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:01.145552  993585 cri.go:89] found id: ""
	I0120 12:34:01.145588  993585 logs.go:282] 0 containers: []
	W0120 12:34:01.145601  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:01.145610  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:01.145678  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:01.179253  993585 cri.go:89] found id: ""
	I0120 12:34:01.179288  993585 logs.go:282] 0 containers: []
	W0120 12:34:01.179299  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:01.179307  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:01.179374  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:01.215878  993585 cri.go:89] found id: ""
	I0120 12:34:01.215916  993585 logs.go:282] 0 containers: []
	W0120 12:34:01.215928  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:01.215937  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:01.216001  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:01.260751  993585 cri.go:89] found id: ""
	I0120 12:34:01.260783  993585 logs.go:282] 0 containers: []
	W0120 12:34:01.260795  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:01.260807  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:01.260883  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:01.303022  993585 cri.go:89] found id: ""
	I0120 12:34:01.303053  993585 logs.go:282] 0 containers: []
	W0120 12:34:01.303065  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:01.303074  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:01.303145  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:01.342483  993585 cri.go:89] found id: ""
	I0120 12:34:01.342539  993585 logs.go:282] 0 containers: []
	W0120 12:34:01.342552  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:01.342562  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:01.342642  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:01.374569  993585 cri.go:89] found id: ""
	I0120 12:34:01.374608  993585 logs.go:282] 0 containers: []
	W0120 12:34:01.374618  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:01.374633  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:01.374696  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:01.406807  993585 cri.go:89] found id: ""
	I0120 12:34:01.406838  993585 logs.go:282] 0 containers: []
	W0120 12:34:01.406848  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:01.406862  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:01.406887  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:01.446081  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:01.446111  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:01.498826  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:01.498865  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:01.512333  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:01.512370  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:01.591631  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:01.591658  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:01.591676  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:04.171834  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:04.189904  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:04.189975  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:04.227671  993585 cri.go:89] found id: ""
	I0120 12:34:04.227705  993585 logs.go:282] 0 containers: []
	W0120 12:34:04.227717  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:04.227725  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:04.227789  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:04.266288  993585 cri.go:89] found id: ""
	I0120 12:34:04.266319  993585 logs.go:282] 0 containers: []
	W0120 12:34:04.266329  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:04.266337  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:04.266415  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:04.303909  993585 cri.go:89] found id: ""
	I0120 12:34:04.303944  993585 logs.go:282] 0 containers: []
	W0120 12:34:04.303952  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:04.303965  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:04.304029  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:04.342095  993585 cri.go:89] found id: ""
	I0120 12:34:04.342135  993585 logs.go:282] 0 containers: []
	W0120 12:34:04.342148  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:04.342156  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:04.342220  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:04.374237  993585 cri.go:89] found id: ""
	I0120 12:34:04.374268  993585 logs.go:282] 0 containers: []
	W0120 12:34:04.374290  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:04.374299  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:04.374383  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:04.407930  993585 cri.go:89] found id: ""
	I0120 12:34:04.407962  993585 logs.go:282] 0 containers: []
	W0120 12:34:04.407973  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:04.407981  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:04.408047  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:04.444108  993585 cri.go:89] found id: ""
	I0120 12:34:04.444133  993585 logs.go:282] 0 containers: []
	W0120 12:34:04.444140  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:04.444146  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:04.444208  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:04.482725  993585 cri.go:89] found id: ""
	I0120 12:34:04.482759  993585 logs.go:282] 0 containers: []
	W0120 12:34:04.482770  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:04.482783  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:04.482796  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:04.536692  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:04.536732  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:04.549928  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:04.549952  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:04.616622  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:04.616645  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:04.616661  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:04.701813  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:04.701846  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:07.245120  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:07.257846  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:07.257917  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:07.293851  993585 cri.go:89] found id: ""
	I0120 12:34:07.293885  993585 logs.go:282] 0 containers: []
	W0120 12:34:07.293898  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:07.293906  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:07.293970  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:07.328532  993585 cri.go:89] found id: ""
	I0120 12:34:07.328568  993585 logs.go:282] 0 containers: []
	W0120 12:34:07.328579  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:07.328588  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:07.328652  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:07.362019  993585 cri.go:89] found id: ""
	I0120 12:34:07.362053  993585 logs.go:282] 0 containers: []
	W0120 12:34:07.362065  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:07.362073  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:07.362136  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:07.394170  993585 cri.go:89] found id: ""
	I0120 12:34:07.394211  993585 logs.go:282] 0 containers: []
	W0120 12:34:07.394223  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:07.394231  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:07.394303  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:07.426650  993585 cri.go:89] found id: ""
	I0120 12:34:07.426694  993585 logs.go:282] 0 containers: []
	W0120 12:34:07.426711  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:07.426719  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:07.426786  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:07.472659  993585 cri.go:89] found id: ""
	I0120 12:34:07.472695  993585 logs.go:282] 0 containers: []
	W0120 12:34:07.472706  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:07.472715  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:07.472788  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:07.506741  993585 cri.go:89] found id: ""
	I0120 12:34:07.506768  993585 logs.go:282] 0 containers: []
	W0120 12:34:07.506777  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:07.506782  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:07.506845  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:07.543976  993585 cri.go:89] found id: ""
	I0120 12:34:07.544007  993585 logs.go:282] 0 containers: []
	W0120 12:34:07.544017  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:07.544028  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:07.544039  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:07.618073  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:07.618109  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:07.633284  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:07.633332  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:07.703104  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:07.703134  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:07.703151  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:07.786367  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:07.786404  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:10.324611  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:10.337443  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:10.337513  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:10.371387  993585 cri.go:89] found id: ""
	I0120 12:34:10.371421  993585 logs.go:282] 0 containers: []
	W0120 12:34:10.371432  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:10.371489  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:10.371545  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:10.403803  993585 cri.go:89] found id: ""
	I0120 12:34:10.403829  993585 logs.go:282] 0 containers: []
	W0120 12:34:10.403837  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:10.403843  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:10.403891  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:10.434806  993585 cri.go:89] found id: ""
	I0120 12:34:10.434829  993585 logs.go:282] 0 containers: []
	W0120 12:34:10.434836  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:10.434841  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:10.434897  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:10.465821  993585 cri.go:89] found id: ""
	I0120 12:34:10.465849  993585 logs.go:282] 0 containers: []
	W0120 12:34:10.465856  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:10.465861  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:10.465905  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:10.497007  993585 cri.go:89] found id: ""
	I0120 12:34:10.497029  993585 logs.go:282] 0 containers: []
	W0120 12:34:10.497037  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:10.497043  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:10.497086  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:10.527026  993585 cri.go:89] found id: ""
	I0120 12:34:10.527050  993585 logs.go:282] 0 containers: []
	W0120 12:34:10.527060  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:10.527069  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:10.527134  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:10.557590  993585 cri.go:89] found id: ""
	I0120 12:34:10.557621  993585 logs.go:282] 0 containers: []
	W0120 12:34:10.557631  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:10.557638  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:10.557694  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:10.587747  993585 cri.go:89] found id: ""
	I0120 12:34:10.587777  993585 logs.go:282] 0 containers: []
	W0120 12:34:10.587787  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:10.587799  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:10.587813  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:10.635855  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:10.635886  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:10.649110  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:10.649147  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:10.719339  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:10.719382  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:10.719399  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:10.791808  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:10.791839  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:13.343317  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:13.356667  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:13.356731  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:13.388894  993585 cri.go:89] found id: ""
	I0120 12:34:13.388926  993585 logs.go:282] 0 containers: []
	W0120 12:34:13.388937  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:13.388944  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:13.389013  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:13.419319  993585 cri.go:89] found id: ""
	I0120 12:34:13.419350  993585 logs.go:282] 0 containers: []
	W0120 12:34:13.419360  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:13.419374  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:13.419440  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:13.451302  993585 cri.go:89] found id: ""
	I0120 12:34:13.451328  993585 logs.go:282] 0 containers: []
	W0120 12:34:13.451335  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:13.451345  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:13.451398  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:13.485033  993585 cri.go:89] found id: ""
	I0120 12:34:13.485062  993585 logs.go:282] 0 containers: []
	W0120 12:34:13.485073  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:13.485079  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:13.485126  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:13.515362  993585 cri.go:89] found id: ""
	I0120 12:34:13.515392  993585 logs.go:282] 0 containers: []
	W0120 12:34:13.515401  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:13.515410  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:13.515481  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:13.545307  993585 cri.go:89] found id: ""
	I0120 12:34:13.545356  993585 logs.go:282] 0 containers: []
	W0120 12:34:13.545366  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:13.545374  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:13.545436  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:13.575714  993585 cri.go:89] found id: ""
	I0120 12:34:13.575738  993585 logs.go:282] 0 containers: []
	W0120 12:34:13.575745  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:13.575751  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:13.575805  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:13.606046  993585 cri.go:89] found id: ""
	I0120 12:34:13.606099  993585 logs.go:282] 0 containers: []
	W0120 12:34:13.606112  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:13.606127  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:13.606145  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:13.667543  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:13.667567  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:13.667584  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:13.741766  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:13.741795  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:13.778095  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:13.778131  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:13.830514  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:13.830554  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:16.343728  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:16.356586  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:16.356665  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:16.390098  993585 cri.go:89] found id: ""
	I0120 12:34:16.390132  993585 logs.go:282] 0 containers: []
	W0120 12:34:16.390146  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:16.390155  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:16.390228  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:16.422651  993585 cri.go:89] found id: ""
	I0120 12:34:16.422682  993585 logs.go:282] 0 containers: []
	W0120 12:34:16.422690  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:16.422699  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:16.422755  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:16.455349  993585 cri.go:89] found id: ""
	I0120 12:34:16.455378  993585 logs.go:282] 0 containers: []
	W0120 12:34:16.455390  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:16.455398  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:16.455467  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:16.494862  993585 cri.go:89] found id: ""
	I0120 12:34:16.494893  993585 logs.go:282] 0 containers: []
	W0120 12:34:16.494904  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:16.494911  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:16.494975  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:16.526039  993585 cri.go:89] found id: ""
	I0120 12:34:16.526070  993585 logs.go:282] 0 containers: []
	W0120 12:34:16.526079  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:16.526087  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:16.526159  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:16.557323  993585 cri.go:89] found id: ""
	I0120 12:34:16.557360  993585 logs.go:282] 0 containers: []
	W0120 12:34:16.557376  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:16.557382  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:16.557444  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:16.607483  993585 cri.go:89] found id: ""
	I0120 12:34:16.607516  993585 logs.go:282] 0 containers: []
	W0120 12:34:16.607527  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:16.607535  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:16.607600  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:16.639620  993585 cri.go:89] found id: ""
	I0120 12:34:16.639644  993585 logs.go:282] 0 containers: []
	W0120 12:34:16.639654  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:16.639665  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:16.639681  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:16.675471  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:16.675500  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:16.726780  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:16.726814  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:16.739029  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:16.739060  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:16.802705  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:16.802738  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:16.802752  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:19.379610  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:19.392739  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:19.392813  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:19.423927  993585 cri.go:89] found id: ""
	I0120 12:34:19.423959  993585 logs.go:282] 0 containers: []
	W0120 12:34:19.423971  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:19.423979  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:19.424049  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:19.455104  993585 cri.go:89] found id: ""
	I0120 12:34:19.455131  993585 logs.go:282] 0 containers: []
	W0120 12:34:19.455140  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:19.455145  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:19.455192  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:19.487611  993585 cri.go:89] found id: ""
	I0120 12:34:19.487642  993585 logs.go:282] 0 containers: []
	W0120 12:34:19.487652  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:19.487664  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:19.487728  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:19.517582  993585 cri.go:89] found id: ""
	I0120 12:34:19.517613  993585 logs.go:282] 0 containers: []
	W0120 12:34:19.517638  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:19.517665  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:19.517734  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:19.549138  993585 cri.go:89] found id: ""
	I0120 12:34:19.549177  993585 logs.go:282] 0 containers: []
	W0120 12:34:19.549190  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:19.549199  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:19.549263  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:19.584290  993585 cri.go:89] found id: ""
	I0120 12:34:19.584317  993585 logs.go:282] 0 containers: []
	W0120 12:34:19.584328  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:19.584334  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:19.584384  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:19.618867  993585 cri.go:89] found id: ""
	I0120 12:34:19.618900  993585 logs.go:282] 0 containers: []
	W0120 12:34:19.618909  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:19.618915  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:19.618967  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:19.651916  993585 cri.go:89] found id: ""
	I0120 12:34:19.651956  993585 logs.go:282] 0 containers: []
	W0120 12:34:19.651968  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:19.651981  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:19.651997  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:19.691207  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:19.691239  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:19.742403  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:19.742436  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:19.755212  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:19.755245  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:19.818642  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:19.818671  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:19.818686  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:22.398142  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:22.415423  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:22.415497  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:22.450558  993585 cri.go:89] found id: ""
	I0120 12:34:22.450595  993585 logs.go:282] 0 containers: []
	W0120 12:34:22.450606  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:22.450613  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:22.450672  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:22.481655  993585 cri.go:89] found id: ""
	I0120 12:34:22.481686  993585 logs.go:282] 0 containers: []
	W0120 12:34:22.481697  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:22.481706  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:22.481773  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:22.515465  993585 cri.go:89] found id: ""
	I0120 12:34:22.515498  993585 logs.go:282] 0 containers: []
	W0120 12:34:22.515509  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:22.515516  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:22.515575  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:22.546538  993585 cri.go:89] found id: ""
	I0120 12:34:22.546566  993585 logs.go:282] 0 containers: []
	W0120 12:34:22.546575  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:22.546583  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:22.546640  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:22.577112  993585 cri.go:89] found id: ""
	I0120 12:34:22.577140  993585 logs.go:282] 0 containers: []
	W0120 12:34:22.577151  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:22.577158  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:22.577216  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:22.610604  993585 cri.go:89] found id: ""
	I0120 12:34:22.610640  993585 logs.go:282] 0 containers: []
	W0120 12:34:22.610650  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:22.610657  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:22.610718  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:22.641708  993585 cri.go:89] found id: ""
	I0120 12:34:22.641737  993585 logs.go:282] 0 containers: []
	W0120 12:34:22.641745  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:22.641752  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:22.641818  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:22.671952  993585 cri.go:89] found id: ""
	I0120 12:34:22.671977  993585 logs.go:282] 0 containers: []
	W0120 12:34:22.671984  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:22.671994  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:22.672004  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:22.722515  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:22.722552  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:22.734806  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:22.734827  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:22.797517  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:22.797554  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:22.797573  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:22.872821  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:22.872851  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:25.413129  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:25.425926  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:25.426021  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:25.462540  993585 cri.go:89] found id: ""
	I0120 12:34:25.462574  993585 logs.go:282] 0 containers: []
	W0120 12:34:25.462584  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:25.462595  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:25.462650  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:25.493646  993585 cri.go:89] found id: ""
	I0120 12:34:25.493672  993585 logs.go:282] 0 containers: []
	W0120 12:34:25.493679  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:25.493688  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:25.493732  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:25.529070  993585 cri.go:89] found id: ""
	I0120 12:34:25.529103  993585 logs.go:282] 0 containers: []
	W0120 12:34:25.529126  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:25.529135  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:25.529199  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:25.562199  993585 cri.go:89] found id: ""
	I0120 12:34:25.562225  993585 logs.go:282] 0 containers: []
	W0120 12:34:25.562258  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:25.562265  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:25.562329  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:25.597698  993585 cri.go:89] found id: ""
	I0120 12:34:25.597728  993585 logs.go:282] 0 containers: []
	W0120 12:34:25.597739  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:25.597745  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:25.597794  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:25.632923  993585 cri.go:89] found id: ""
	I0120 12:34:25.632950  993585 logs.go:282] 0 containers: []
	W0120 12:34:25.632961  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:25.632968  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:25.633031  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:25.664379  993585 cri.go:89] found id: ""
	I0120 12:34:25.664409  993585 logs.go:282] 0 containers: []
	W0120 12:34:25.664419  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:25.664434  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:25.664490  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:25.694965  993585 cri.go:89] found id: ""
	I0120 12:34:25.694992  993585 logs.go:282] 0 containers: []
	W0120 12:34:25.695002  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:25.695014  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:25.695027  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:25.742956  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:25.742987  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:25.755095  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:25.755122  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:25.822777  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:25.822807  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:25.822824  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:25.895354  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:25.895389  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:28.433411  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:28.445691  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:28.445750  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:28.475915  993585 cri.go:89] found id: ""
	I0120 12:34:28.475949  993585 logs.go:282] 0 containers: []
	W0120 12:34:28.475961  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:28.475969  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:28.476029  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:28.506219  993585 cri.go:89] found id: ""
	I0120 12:34:28.506253  993585 logs.go:282] 0 containers: []
	W0120 12:34:28.506264  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:28.506272  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:28.506332  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:28.539662  993585 cri.go:89] found id: ""
	I0120 12:34:28.539693  993585 logs.go:282] 0 containers: []
	W0120 12:34:28.539704  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:28.539712  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:28.539775  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:28.570360  993585 cri.go:89] found id: ""
	I0120 12:34:28.570390  993585 logs.go:282] 0 containers: []
	W0120 12:34:28.570398  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:28.570404  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:28.570466  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:28.599217  993585 cri.go:89] found id: ""
	I0120 12:34:28.599242  993585 logs.go:282] 0 containers: []
	W0120 12:34:28.599249  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:28.599255  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:28.599310  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:28.629325  993585 cri.go:89] found id: ""
	I0120 12:34:28.629366  993585 logs.go:282] 0 containers: []
	W0120 12:34:28.629378  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:28.629386  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:28.629453  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:28.659625  993585 cri.go:89] found id: ""
	I0120 12:34:28.659657  993585 logs.go:282] 0 containers: []
	W0120 12:34:28.659668  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:28.659675  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:28.659734  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:28.695195  993585 cri.go:89] found id: ""
	I0120 12:34:28.695222  993585 logs.go:282] 0 containers: []
	W0120 12:34:28.695232  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:28.695242  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:28.695255  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:28.756910  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:28.756942  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:28.771902  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:28.771932  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:28.859464  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:28.859491  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:28.859510  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:28.931739  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:28.931769  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:31.472251  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:31.484961  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:31.485019  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:31.518142  993585 cri.go:89] found id: ""
	I0120 12:34:31.518175  993585 logs.go:282] 0 containers: []
	W0120 12:34:31.518187  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:31.518194  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:31.518241  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:31.550125  993585 cri.go:89] found id: ""
	I0120 12:34:31.550187  993585 logs.go:282] 0 containers: []
	W0120 12:34:31.550201  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:31.550210  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:31.550274  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:31.583805  993585 cri.go:89] found id: ""
	I0120 12:34:31.583834  993585 logs.go:282] 0 containers: []
	W0120 12:34:31.583846  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:31.583854  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:31.583908  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:31.626186  993585 cri.go:89] found id: ""
	I0120 12:34:31.626209  993585 logs.go:282] 0 containers: []
	W0120 12:34:31.626217  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:31.626223  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:31.626271  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:31.657467  993585 cri.go:89] found id: ""
	I0120 12:34:31.657507  993585 logs.go:282] 0 containers: []
	W0120 12:34:31.657519  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:31.657527  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:31.657594  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:31.686983  993585 cri.go:89] found id: ""
	I0120 12:34:31.687008  993585 logs.go:282] 0 containers: []
	W0120 12:34:31.687015  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:31.687021  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:31.687075  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:31.721602  993585 cri.go:89] found id: ""
	I0120 12:34:31.721632  993585 logs.go:282] 0 containers: []
	W0120 12:34:31.721645  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:31.721651  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:31.721701  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:31.751369  993585 cri.go:89] found id: ""
	I0120 12:34:31.751394  993585 logs.go:282] 0 containers: []
	W0120 12:34:31.751401  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:31.751412  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:31.751435  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:31.816285  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:31.816327  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:31.816344  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:31.891930  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:31.891969  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:31.927472  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:31.927503  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:31.974997  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:31.975024  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:34.488614  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:34.506548  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:34.506624  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:34.563005  993585 cri.go:89] found id: ""
	I0120 12:34:34.563039  993585 logs.go:282] 0 containers: []
	W0120 12:34:34.563052  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:34.563060  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:34.563124  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:34.594244  993585 cri.go:89] found id: ""
	I0120 12:34:34.594284  993585 logs.go:282] 0 containers: []
	W0120 12:34:34.594296  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:34.594304  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:34.594373  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:34.625619  993585 cri.go:89] found id: ""
	I0120 12:34:34.625654  993585 logs.go:282] 0 containers: []
	W0120 12:34:34.625665  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:34.625673  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:34.625738  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:34.658589  993585 cri.go:89] found id: ""
	I0120 12:34:34.658619  993585 logs.go:282] 0 containers: []
	W0120 12:34:34.658627  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:34.658635  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:34.658703  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:34.689254  993585 cri.go:89] found id: ""
	I0120 12:34:34.689283  993585 logs.go:282] 0 containers: []
	W0120 12:34:34.689294  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:34.689301  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:34.689361  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:34.718991  993585 cri.go:89] found id: ""
	I0120 12:34:34.719017  993585 logs.go:282] 0 containers: []
	W0120 12:34:34.719025  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:34.719032  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:34.719087  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:34.755470  993585 cri.go:89] found id: ""
	I0120 12:34:34.755506  993585 logs.go:282] 0 containers: []
	W0120 12:34:34.755517  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:34.755525  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:34.755591  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:34.794468  993585 cri.go:89] found id: ""
	I0120 12:34:34.794511  993585 logs.go:282] 0 containers: []
	W0120 12:34:34.794536  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:34.794550  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:34.794567  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:34.872224  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:34.872255  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:34.906752  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:34.906782  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:34.958387  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:34.958418  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:34.970224  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:34.970247  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:35.042447  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:37.542589  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:37.559095  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:37.559165  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:37.598316  993585 cri.go:89] found id: ""
	I0120 12:34:37.598348  993585 logs.go:282] 0 containers: []
	W0120 12:34:37.598360  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:37.598369  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:37.598438  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:37.628599  993585 cri.go:89] found id: ""
	I0120 12:34:37.628633  993585 logs.go:282] 0 containers: []
	W0120 12:34:37.628645  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:37.628652  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:37.628727  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:37.668373  993585 cri.go:89] found id: ""
	I0120 12:34:37.668415  993585 logs.go:282] 0 containers: []
	W0120 12:34:37.668428  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:37.668436  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:37.668505  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:37.708471  993585 cri.go:89] found id: ""
	I0120 12:34:37.708506  993585 logs.go:282] 0 containers: []
	W0120 12:34:37.708517  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:37.708525  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:37.708586  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:37.741568  993585 cri.go:89] found id: ""
	I0120 12:34:37.741620  993585 logs.go:282] 0 containers: []
	W0120 12:34:37.741639  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:37.741647  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:37.741722  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:37.774368  993585 cri.go:89] found id: ""
	I0120 12:34:37.774396  993585 logs.go:282] 0 containers: []
	W0120 12:34:37.774406  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:37.774414  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:37.774482  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:37.806996  993585 cri.go:89] found id: ""
	I0120 12:34:37.807031  993585 logs.go:282] 0 containers: []
	W0120 12:34:37.807042  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:37.807050  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:37.807111  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:37.843251  993585 cri.go:89] found id: ""
	I0120 12:34:37.843285  993585 logs.go:282] 0 containers: []
	W0120 12:34:37.843296  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:37.843317  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:37.843336  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:37.918915  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:37.918937  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:37.918949  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:38.003693  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:38.003735  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:38.044200  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:38.044228  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:38.098358  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:38.098396  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:40.611766  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:40.625430  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:40.625513  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:40.662291  993585 cri.go:89] found id: ""
	I0120 12:34:40.662328  993585 logs.go:282] 0 containers: []
	W0120 12:34:40.662340  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:40.662348  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:40.662416  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:40.700505  993585 cri.go:89] found id: ""
	I0120 12:34:40.700535  993585 logs.go:282] 0 containers: []
	W0120 12:34:40.700543  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:40.700549  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:40.700621  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:40.740098  993585 cri.go:89] found id: ""
	I0120 12:34:40.740156  993585 logs.go:282] 0 containers: []
	W0120 12:34:40.740168  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:40.740177  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:40.740246  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:40.779511  993585 cri.go:89] found id: ""
	I0120 12:34:40.779538  993585 logs.go:282] 0 containers: []
	W0120 12:34:40.779547  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:40.779552  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:40.779602  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:40.814466  993585 cri.go:89] found id: ""
	I0120 12:34:40.814508  993585 logs.go:282] 0 containers: []
	W0120 12:34:40.814539  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:40.814549  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:40.814624  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:40.848198  993585 cri.go:89] found id: ""
	I0120 12:34:40.848224  993585 logs.go:282] 0 containers: []
	W0120 12:34:40.848233  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:40.848239  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:40.848295  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:40.881226  993585 cri.go:89] found id: ""
	I0120 12:34:40.881260  993585 logs.go:282] 0 containers: []
	W0120 12:34:40.881273  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:40.881281  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:40.881345  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:40.914605  993585 cri.go:89] found id: ""
	I0120 12:34:40.914639  993585 logs.go:282] 0 containers: []
	W0120 12:34:40.914649  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:40.914659  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:40.914671  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:40.967363  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:40.967401  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:40.981622  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:40.981655  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:41.052041  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:41.052074  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:41.052089  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:41.136661  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:41.136699  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:43.674682  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:43.690652  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:43.690723  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:43.721291  993585 cri.go:89] found id: ""
	I0120 12:34:43.721323  993585 logs.go:282] 0 containers: []
	W0120 12:34:43.721334  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:43.721342  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:43.721410  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:43.752041  993585 cri.go:89] found id: ""
	I0120 12:34:43.752065  993585 logs.go:282] 0 containers: []
	W0120 12:34:43.752072  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:43.752078  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:43.752138  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:43.785868  993585 cri.go:89] found id: ""
	I0120 12:34:43.785901  993585 logs.go:282] 0 containers: []
	W0120 12:34:43.785913  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:43.785920  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:43.785989  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:43.815950  993585 cri.go:89] found id: ""
	I0120 12:34:43.815981  993585 logs.go:282] 0 containers: []
	W0120 12:34:43.815991  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:43.815998  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:43.816051  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:43.846957  993585 cri.go:89] found id: ""
	I0120 12:34:43.846989  993585 logs.go:282] 0 containers: []
	W0120 12:34:43.846998  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:43.847006  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:43.847063  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:43.879933  993585 cri.go:89] found id: ""
	I0120 12:34:43.879961  993585 logs.go:282] 0 containers: []
	W0120 12:34:43.879971  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:43.879979  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:43.880037  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:43.910895  993585 cri.go:89] found id: ""
	I0120 12:34:43.910922  993585 logs.go:282] 0 containers: []
	W0120 12:34:43.910932  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:43.910940  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:43.911004  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:43.940052  993585 cri.go:89] found id: ""
	I0120 12:34:43.940083  993585 logs.go:282] 0 containers: []
	W0120 12:34:43.940092  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:43.940103  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:43.940119  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:43.992764  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:43.992797  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:44.004467  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:44.004489  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:44.076395  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:44.076424  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:44.076440  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:44.155006  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:44.155051  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:46.706685  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:46.720910  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:46.720986  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:46.769398  993585 cri.go:89] found id: ""
	I0120 12:34:46.769438  993585 logs.go:282] 0 containers: []
	W0120 12:34:46.769452  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:46.769461  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:46.769532  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:46.812658  993585 cri.go:89] found id: ""
	I0120 12:34:46.812692  993585 logs.go:282] 0 containers: []
	W0120 12:34:46.812704  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:46.812712  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:46.812780  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:46.849224  993585 cri.go:89] found id: ""
	I0120 12:34:46.849260  993585 logs.go:282] 0 containers: []
	W0120 12:34:46.849271  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:46.849278  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:46.849340  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:46.880621  993585 cri.go:89] found id: ""
	I0120 12:34:46.880660  993585 logs.go:282] 0 containers: []
	W0120 12:34:46.880672  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:46.880680  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:46.880754  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:46.917825  993585 cri.go:89] found id: ""
	I0120 12:34:46.917860  993585 logs.go:282] 0 containers: []
	W0120 12:34:46.917872  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:46.917880  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:46.917948  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:46.953069  993585 cri.go:89] found id: ""
	I0120 12:34:46.953102  993585 logs.go:282] 0 containers: []
	W0120 12:34:46.953114  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:46.953122  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:46.953210  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:46.991590  993585 cri.go:89] found id: ""
	I0120 12:34:46.991624  993585 logs.go:282] 0 containers: []
	W0120 12:34:46.991636  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:46.991643  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:46.991709  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:47.026992  993585 cri.go:89] found id: ""
	I0120 12:34:47.027028  993585 logs.go:282] 0 containers: []
	W0120 12:34:47.027039  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:47.027052  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:47.027070  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:47.041560  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:47.041600  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:47.116950  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:47.116982  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:47.116999  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:47.220147  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:47.220186  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:47.261692  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:47.261735  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:49.823725  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:49.837812  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:49.837891  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:49.870910  993585 cri.go:89] found id: ""
	I0120 12:34:49.870942  993585 logs.go:282] 0 containers: []
	W0120 12:34:49.870954  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:49.870974  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:49.871038  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:49.901938  993585 cri.go:89] found id: ""
	I0120 12:34:49.901971  993585 logs.go:282] 0 containers: []
	W0120 12:34:49.901983  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:49.901991  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:49.902050  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:49.934859  993585 cri.go:89] found id: ""
	I0120 12:34:49.934895  993585 logs.go:282] 0 containers: []
	W0120 12:34:49.934908  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:49.934916  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:49.934978  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:49.969109  993585 cri.go:89] found id: ""
	I0120 12:34:49.969144  993585 logs.go:282] 0 containers: []
	W0120 12:34:49.969152  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:49.969159  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:49.969215  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:50.000593  993585 cri.go:89] found id: ""
	I0120 12:34:50.000624  993585 logs.go:282] 0 containers: []
	W0120 12:34:50.000634  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:50.000644  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:50.000704  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:50.031935  993585 cri.go:89] found id: ""
	I0120 12:34:50.031956  993585 logs.go:282] 0 containers: []
	W0120 12:34:50.031963  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:50.031968  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:50.032013  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:50.066876  993585 cri.go:89] found id: ""
	I0120 12:34:50.066904  993585 logs.go:282] 0 containers: []
	W0120 12:34:50.066914  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:50.066922  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:50.066980  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:50.099413  993585 cri.go:89] found id: ""
	I0120 12:34:50.099440  993585 logs.go:282] 0 containers: []
	W0120 12:34:50.099448  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:50.099458  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:50.099469  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:50.147538  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:50.147565  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:50.159202  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:50.159227  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:50.233169  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:50.233201  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:50.233218  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:50.313297  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:50.313331  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:52.849232  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:52.863600  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:52.863668  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:52.897114  993585 cri.go:89] found id: ""
	I0120 12:34:52.897146  993585 logs.go:282] 0 containers: []
	W0120 12:34:52.897158  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:52.897166  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:52.897235  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:52.931572  993585 cri.go:89] found id: ""
	I0120 12:34:52.931608  993585 logs.go:282] 0 containers: []
	W0120 12:34:52.931621  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:52.931631  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:52.931699  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:52.967427  993585 cri.go:89] found id: ""
	I0120 12:34:52.967464  993585 logs.go:282] 0 containers: []
	W0120 12:34:52.967477  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:52.967485  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:52.967550  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:53.004996  993585 cri.go:89] found id: ""
	I0120 12:34:53.005036  993585 logs.go:282] 0 containers: []
	W0120 12:34:53.005045  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:53.005052  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:53.005130  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:53.042883  993585 cri.go:89] found id: ""
	I0120 12:34:53.042920  993585 logs.go:282] 0 containers: []
	W0120 12:34:53.042932  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:53.042941  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:53.043012  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:53.081504  993585 cri.go:89] found id: ""
	I0120 12:34:53.081548  993585 logs.go:282] 0 containers: []
	W0120 12:34:53.081560  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:53.081569  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:53.081638  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:53.116486  993585 cri.go:89] found id: ""
	I0120 12:34:53.116526  993585 logs.go:282] 0 containers: []
	W0120 12:34:53.116537  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:53.116546  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:53.116621  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:53.150011  993585 cri.go:89] found id: ""
	I0120 12:34:53.150044  993585 logs.go:282] 0 containers: []
	W0120 12:34:53.150055  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:53.150068  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:53.150082  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:53.236271  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:53.236314  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:53.272793  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:53.272823  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:53.328164  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:53.328210  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:53.342124  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:53.342159  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:53.436951  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:55.938662  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:55.954006  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:55.954080  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:55.995805  993585 cri.go:89] found id: ""
	I0120 12:34:55.995836  993585 logs.go:282] 0 containers: []
	W0120 12:34:55.995847  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:55.995855  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:55.995922  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:56.037391  993585 cri.go:89] found id: ""
	I0120 12:34:56.037422  993585 logs.go:282] 0 containers: []
	W0120 12:34:56.037431  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:56.037440  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:56.037500  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:56.073395  993585 cri.go:89] found id: ""
	I0120 12:34:56.073432  993585 logs.go:282] 0 containers: []
	W0120 12:34:56.073444  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:56.073452  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:56.073521  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:56.113060  993585 cri.go:89] found id: ""
	I0120 12:34:56.113095  993585 logs.go:282] 0 containers: []
	W0120 12:34:56.113106  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:56.113114  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:56.113192  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:56.149448  993585 cri.go:89] found id: ""
	I0120 12:34:56.149481  993585 logs.go:282] 0 containers: []
	W0120 12:34:56.149492  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:56.149501  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:56.149565  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:56.188193  993585 cri.go:89] found id: ""
	I0120 12:34:56.188222  993585 logs.go:282] 0 containers: []
	W0120 12:34:56.188232  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:56.188241  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:56.188305  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:56.229490  993585 cri.go:89] found id: ""
	I0120 12:34:56.229520  993585 logs.go:282] 0 containers: []
	W0120 12:34:56.229530  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:56.229538  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:56.229596  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:56.268312  993585 cri.go:89] found id: ""
	I0120 12:34:56.268342  993585 logs.go:282] 0 containers: []
	W0120 12:34:56.268355  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:56.268368  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:56.268382  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:56.362946  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:56.362970  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:56.362987  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:56.449009  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:56.449049  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:56.497349  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:56.497393  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:56.552829  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:56.552864  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:59.068750  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:59.085643  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:59.085720  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:59.128466  993585 cri.go:89] found id: ""
	I0120 12:34:59.128566  993585 logs.go:282] 0 containers: []
	W0120 12:34:59.128584  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:59.128594  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:59.128677  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:59.175838  993585 cri.go:89] found id: ""
	I0120 12:34:59.175873  993585 logs.go:282] 0 containers: []
	W0120 12:34:59.175885  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:59.175893  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:59.175961  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:59.211334  993585 cri.go:89] found id: ""
	I0120 12:34:59.211371  993585 logs.go:282] 0 containers: []
	W0120 12:34:59.211383  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:59.211392  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:59.211466  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:59.248992  993585 cri.go:89] found id: ""
	I0120 12:34:59.249031  993585 logs.go:282] 0 containers: []
	W0120 12:34:59.249043  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:59.249060  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:59.249127  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:59.285229  993585 cri.go:89] found id: ""
	I0120 12:34:59.285266  993585 logs.go:282] 0 containers: []
	W0120 12:34:59.285279  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:59.285288  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:59.285367  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:59.323049  993585 cri.go:89] found id: ""
	I0120 12:34:59.323081  993585 logs.go:282] 0 containers: []
	W0120 12:34:59.323092  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:59.323099  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:59.323180  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:59.365925  993585 cri.go:89] found id: ""
	I0120 12:34:59.365968  993585 logs.go:282] 0 containers: []
	W0120 12:34:59.365978  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:59.365985  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:59.366060  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:59.406489  993585 cri.go:89] found id: ""
	I0120 12:34:59.406540  993585 logs.go:282] 0 containers: []
	W0120 12:34:59.406553  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:59.406565  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:59.406579  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:59.477858  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:59.477896  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:59.494617  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:59.494658  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:59.572132  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:59.572160  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:59.572178  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:59.668424  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:59.668471  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:02.212721  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:02.227926  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:02.228019  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:02.266386  993585 cri.go:89] found id: ""
	I0120 12:35:02.266431  993585 logs.go:282] 0 containers: []
	W0120 12:35:02.266444  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:02.266454  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:02.266541  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:02.301567  993585 cri.go:89] found id: ""
	I0120 12:35:02.301595  993585 logs.go:282] 0 containers: []
	W0120 12:35:02.301607  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:02.301615  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:02.301678  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:02.338717  993585 cri.go:89] found id: ""
	I0120 12:35:02.338758  993585 logs.go:282] 0 containers: []
	W0120 12:35:02.338770  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:02.338778  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:02.338847  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:02.373953  993585 cri.go:89] found id: ""
	I0120 12:35:02.373990  993585 logs.go:282] 0 containers: []
	W0120 12:35:02.374004  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:02.374014  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:02.374113  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:02.406791  993585 cri.go:89] found id: ""
	I0120 12:35:02.406828  993585 logs.go:282] 0 containers: []
	W0120 12:35:02.406839  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:02.406845  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:02.406897  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:02.443578  993585 cri.go:89] found id: ""
	I0120 12:35:02.443609  993585 logs.go:282] 0 containers: []
	W0120 12:35:02.443617  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:02.443626  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:02.443676  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:02.477334  993585 cri.go:89] found id: ""
	I0120 12:35:02.477374  993585 logs.go:282] 0 containers: []
	W0120 12:35:02.477387  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:02.477395  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:02.477468  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:02.511320  993585 cri.go:89] found id: ""
	I0120 12:35:02.511347  993585 logs.go:282] 0 containers: []
	W0120 12:35:02.511357  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:02.511368  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:02.511379  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:02.563616  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:02.563655  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:02.589388  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:02.589428  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:02.668649  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:02.668676  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:02.668690  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:02.754754  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:02.754788  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:05.298701  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:05.312912  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:05.312991  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:05.345040  993585 cri.go:89] found id: ""
	I0120 12:35:05.345073  993585 logs.go:282] 0 containers: []
	W0120 12:35:05.345082  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:05.345095  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:05.345166  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:05.378693  993585 cri.go:89] found id: ""
	I0120 12:35:05.378728  993585 logs.go:282] 0 containers: []
	W0120 12:35:05.378739  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:05.378747  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:05.378802  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:05.411600  993585 cri.go:89] found id: ""
	I0120 12:35:05.411628  993585 logs.go:282] 0 containers: []
	W0120 12:35:05.411636  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:05.411642  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:05.411693  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:05.444416  993585 cri.go:89] found id: ""
	I0120 12:35:05.444445  993585 logs.go:282] 0 containers: []
	W0120 12:35:05.444453  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:05.444461  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:05.444525  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:05.475125  993585 cri.go:89] found id: ""
	I0120 12:35:05.475158  993585 logs.go:282] 0 containers: []
	W0120 12:35:05.475171  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:05.475177  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:05.475246  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:05.508163  993585 cri.go:89] found id: ""
	I0120 12:35:05.508194  993585 logs.go:282] 0 containers: []
	W0120 12:35:05.508207  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:05.508215  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:05.508278  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:05.543703  993585 cri.go:89] found id: ""
	I0120 12:35:05.543737  993585 logs.go:282] 0 containers: []
	W0120 12:35:05.543745  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:05.543751  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:05.543819  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:05.579560  993585 cri.go:89] found id: ""
	I0120 12:35:05.579594  993585 logs.go:282] 0 containers: []
	W0120 12:35:05.579606  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:05.579620  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:05.579634  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:05.632935  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:05.632986  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:05.645983  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:05.646012  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:05.719551  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:05.719582  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:05.719599  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:05.799242  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:05.799283  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:08.344816  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:08.358927  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:08.359006  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:08.393237  993585 cri.go:89] found id: ""
	I0120 12:35:08.393265  993585 logs.go:282] 0 containers: []
	W0120 12:35:08.393274  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:08.393280  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:08.393333  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:08.432032  993585 cri.go:89] found id: ""
	I0120 12:35:08.432061  993585 logs.go:282] 0 containers: []
	W0120 12:35:08.432069  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:08.432077  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:08.432155  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:08.465329  993585 cri.go:89] found id: ""
	I0120 12:35:08.465357  993585 logs.go:282] 0 containers: []
	W0120 12:35:08.465366  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:08.465375  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:08.465450  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:08.498889  993585 cri.go:89] found id: ""
	I0120 12:35:08.498932  993585 logs.go:282] 0 containers: []
	W0120 12:35:08.498944  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:08.498952  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:08.499034  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:08.533799  993585 cri.go:89] found id: ""
	I0120 12:35:08.533827  993585 logs.go:282] 0 containers: []
	W0120 12:35:08.533836  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:08.533842  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:08.533898  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:08.569072  993585 cri.go:89] found id: ""
	I0120 12:35:08.569109  993585 logs.go:282] 0 containers: []
	W0120 12:35:08.569121  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:08.569129  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:08.569190  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:08.602775  993585 cri.go:89] found id: ""
	I0120 12:35:08.602815  993585 logs.go:282] 0 containers: []
	W0120 12:35:08.602828  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:08.602836  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:08.602899  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:08.637207  993585 cri.go:89] found id: ""
	I0120 12:35:08.637242  993585 logs.go:282] 0 containers: []
	W0120 12:35:08.637253  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:08.637266  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:08.637281  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:08.650046  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:08.650077  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:08.717640  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:08.717668  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:08.717682  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:08.795565  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:08.795605  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:08.832910  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:08.832951  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:11.391198  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:11.404454  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:11.404548  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:11.438901  993585 cri.go:89] found id: ""
	I0120 12:35:11.438942  993585 logs.go:282] 0 containers: []
	W0120 12:35:11.438951  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:11.438959  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:11.439028  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:11.475199  993585 cri.go:89] found id: ""
	I0120 12:35:11.475228  993585 logs.go:282] 0 containers: []
	W0120 12:35:11.475237  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:11.475243  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:11.475304  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:11.507984  993585 cri.go:89] found id: ""
	I0120 12:35:11.508029  993585 logs.go:282] 0 containers: []
	W0120 12:35:11.508041  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:11.508052  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:11.508145  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:11.544131  993585 cri.go:89] found id: ""
	I0120 12:35:11.544162  993585 logs.go:282] 0 containers: []
	W0120 12:35:11.544170  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:11.544176  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:11.544229  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:11.585316  993585 cri.go:89] found id: ""
	I0120 12:35:11.585353  993585 logs.go:282] 0 containers: []
	W0120 12:35:11.585364  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:11.585370  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:11.585424  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:11.621531  993585 cri.go:89] found id: ""
	I0120 12:35:11.621565  993585 logs.go:282] 0 containers: []
	W0120 12:35:11.621578  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:11.621587  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:11.621644  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:11.653882  993585 cri.go:89] found id: ""
	I0120 12:35:11.653915  993585 logs.go:282] 0 containers: []
	W0120 12:35:11.653926  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:11.653935  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:11.654005  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:11.686715  993585 cri.go:89] found id: ""
	I0120 12:35:11.686751  993585 logs.go:282] 0 containers: []
	W0120 12:35:11.686763  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:11.686777  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:11.686792  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:11.766495  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:11.766550  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:11.805907  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:11.805944  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:11.854399  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:11.854435  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:11.867131  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:11.867168  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:11.930826  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:14.431154  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:14.444170  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:14.444252  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:14.478030  993585 cri.go:89] found id: ""
	I0120 12:35:14.478067  993585 logs.go:282] 0 containers: []
	W0120 12:35:14.478077  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:14.478083  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:14.478148  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:14.510821  993585 cri.go:89] found id: ""
	I0120 12:35:14.510855  993585 logs.go:282] 0 containers: []
	W0120 12:35:14.510867  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:14.510874  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:14.510942  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:14.543080  993585 cri.go:89] found id: ""
	I0120 12:35:14.543136  993585 logs.go:282] 0 containers: []
	W0120 12:35:14.543149  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:14.543157  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:14.543214  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:14.579258  993585 cri.go:89] found id: ""
	I0120 12:35:14.579293  993585 logs.go:282] 0 containers: []
	W0120 12:35:14.579302  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:14.579308  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:14.579361  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:14.617149  993585 cri.go:89] found id: ""
	I0120 12:35:14.617187  993585 logs.go:282] 0 containers: []
	W0120 12:35:14.617198  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:14.617206  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:14.617278  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:14.650716  993585 cri.go:89] found id: ""
	I0120 12:35:14.650754  993585 logs.go:282] 0 containers: []
	W0120 12:35:14.650793  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:14.650803  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:14.650874  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:14.685987  993585 cri.go:89] found id: ""
	I0120 12:35:14.686018  993585 logs.go:282] 0 containers: []
	W0120 12:35:14.686026  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:14.686032  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:14.686084  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:14.736332  993585 cri.go:89] found id: ""
	I0120 12:35:14.736370  993585 logs.go:282] 0 containers: []
	W0120 12:35:14.736378  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:14.736389  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:14.736406  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:14.789693  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:14.789734  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:14.818344  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:14.818376  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:14.891944  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:14.891974  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:14.891990  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:14.969846  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:14.969888  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:17.512148  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:17.525055  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:17.525143  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:17.559502  993585 cri.go:89] found id: ""
	I0120 12:35:17.559539  993585 logs.go:282] 0 containers: []
	W0120 12:35:17.559550  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:17.559563  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:17.559624  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:17.596133  993585 cri.go:89] found id: ""
	I0120 12:35:17.596170  993585 logs.go:282] 0 containers: []
	W0120 12:35:17.596182  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:17.596190  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:17.596258  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:17.632458  993585 cri.go:89] found id: ""
	I0120 12:35:17.632511  993585 logs.go:282] 0 containers: []
	W0120 12:35:17.632526  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:17.632535  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:17.632614  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:17.666860  993585 cri.go:89] found id: ""
	I0120 12:35:17.666891  993585 logs.go:282] 0 containers: []
	W0120 12:35:17.666899  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:17.666905  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:17.666959  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:17.701282  993585 cri.go:89] found id: ""
	I0120 12:35:17.701309  993585 logs.go:282] 0 containers: []
	W0120 12:35:17.701318  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:17.701325  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:17.701384  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:17.733358  993585 cri.go:89] found id: ""
	I0120 12:35:17.733391  993585 logs.go:282] 0 containers: []
	W0120 12:35:17.733399  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:17.733406  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:17.733460  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:17.769630  993585 cri.go:89] found id: ""
	I0120 12:35:17.769661  993585 logs.go:282] 0 containers: []
	W0120 12:35:17.769670  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:17.769677  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:17.769731  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:17.801855  993585 cri.go:89] found id: ""
	I0120 12:35:17.801894  993585 logs.go:282] 0 containers: []
	W0120 12:35:17.801906  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:17.801920  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:17.801935  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:17.852827  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:17.852869  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:17.866559  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:17.866589  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:17.937036  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:17.937058  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:17.937078  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:18.011449  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:18.011482  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:20.551859  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:20.564461  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:20.564522  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:20.599674  993585 cri.go:89] found id: ""
	I0120 12:35:20.599700  993585 logs.go:282] 0 containers: []
	W0120 12:35:20.599708  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:20.599713  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:20.599761  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:20.634303  993585 cri.go:89] found id: ""
	I0120 12:35:20.634330  993585 logs.go:282] 0 containers: []
	W0120 12:35:20.634340  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:20.634347  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:20.634395  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:20.670501  993585 cri.go:89] found id: ""
	I0120 12:35:20.670552  993585 logs.go:282] 0 containers: []
	W0120 12:35:20.670562  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:20.670568  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:20.670635  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:20.703603  993585 cri.go:89] found id: ""
	I0120 12:35:20.703627  993585 logs.go:282] 0 containers: []
	W0120 12:35:20.703636  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:20.703644  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:20.703699  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:20.733456  993585 cri.go:89] found id: ""
	I0120 12:35:20.733490  993585 logs.go:282] 0 containers: []
	W0120 12:35:20.733501  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:20.733509  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:20.733565  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:20.764504  993585 cri.go:89] found id: ""
	I0120 12:35:20.764529  993585 logs.go:282] 0 containers: []
	W0120 12:35:20.764539  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:20.764547  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:20.764608  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:20.796510  993585 cri.go:89] found id: ""
	I0120 12:35:20.796543  993585 logs.go:282] 0 containers: []
	W0120 12:35:20.796553  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:20.796560  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:20.796623  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:20.828114  993585 cri.go:89] found id: ""
	I0120 12:35:20.828147  993585 logs.go:282] 0 containers: []
	W0120 12:35:20.828158  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:20.828170  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:20.828189  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:20.889902  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:20.889933  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:20.889949  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:20.962443  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:20.962471  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:20.999767  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:20.999798  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:21.050810  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:21.050837  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:23.565446  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:23.577843  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:23.577912  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:23.612669  993585 cri.go:89] found id: ""
	I0120 12:35:23.612699  993585 logs.go:282] 0 containers: []
	W0120 12:35:23.612710  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:23.612719  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:23.612787  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:23.646750  993585 cri.go:89] found id: ""
	I0120 12:35:23.646783  993585 logs.go:282] 0 containers: []
	W0120 12:35:23.646793  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:23.646799  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:23.646853  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:23.679879  993585 cri.go:89] found id: ""
	I0120 12:35:23.679907  993585 logs.go:282] 0 containers: []
	W0120 12:35:23.679917  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:23.679925  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:23.679989  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:23.713255  993585 cri.go:89] found id: ""
	I0120 12:35:23.713292  993585 logs.go:282] 0 containers: []
	W0120 12:35:23.713301  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:23.713307  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:23.713358  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:23.742940  993585 cri.go:89] found id: ""
	I0120 12:35:23.742966  993585 logs.go:282] 0 containers: []
	W0120 12:35:23.742974  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:23.742980  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:23.743029  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:23.771816  993585 cri.go:89] found id: ""
	I0120 12:35:23.771846  993585 logs.go:282] 0 containers: []
	W0120 12:35:23.771865  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:23.771871  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:23.771937  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:23.801508  993585 cri.go:89] found id: ""
	I0120 12:35:23.801536  993585 logs.go:282] 0 containers: []
	W0120 12:35:23.801544  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:23.801549  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:23.801606  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:23.830867  993585 cri.go:89] found id: ""
	I0120 12:35:23.830897  993585 logs.go:282] 0 containers: []
	W0120 12:35:23.830906  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:23.830918  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:23.830934  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:23.882650  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:23.882678  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:23.895231  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:23.895260  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:23.959418  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:23.959446  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:23.959461  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:24.036771  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:24.036802  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:26.577129  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:26.594999  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:26.595084  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:26.627078  993585 cri.go:89] found id: ""
	I0120 12:35:26.627114  993585 logs.go:282] 0 containers: []
	W0120 12:35:26.627123  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:26.627129  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:26.627184  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:26.667285  993585 cri.go:89] found id: ""
	I0120 12:35:26.667317  993585 logs.go:282] 0 containers: []
	W0120 12:35:26.667333  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:26.667340  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:26.667416  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:26.704185  993585 cri.go:89] found id: ""
	I0120 12:35:26.704216  993585 logs.go:282] 0 containers: []
	W0120 12:35:26.704227  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:26.704235  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:26.704296  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:26.738047  993585 cri.go:89] found id: ""
	I0120 12:35:26.738082  993585 logs.go:282] 0 containers: []
	W0120 12:35:26.738108  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:26.738117  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:26.738183  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:26.768751  993585 cri.go:89] found id: ""
	I0120 12:35:26.768783  993585 logs.go:282] 0 containers: []
	W0120 12:35:26.768794  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:26.768801  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:26.768865  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:26.799890  993585 cri.go:89] found id: ""
	I0120 12:35:26.799916  993585 logs.go:282] 0 containers: []
	W0120 12:35:26.799924  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:26.799930  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:26.799980  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:26.831879  993585 cri.go:89] found id: ""
	I0120 12:35:26.831910  993585 logs.go:282] 0 containers: []
	W0120 12:35:26.831921  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:26.831929  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:26.831987  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:26.869231  993585 cri.go:89] found id: ""
	I0120 12:35:26.869264  993585 logs.go:282] 0 containers: []
	W0120 12:35:26.869272  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:26.869282  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:26.869294  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:26.929958  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:26.929982  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:26.929996  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:27.025154  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:27.025189  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:27.073288  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:27.073333  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:27.124126  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:27.124156  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:29.638666  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:29.652209  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:29.652286  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:29.690747  993585 cri.go:89] found id: ""
	I0120 12:35:29.690777  993585 logs.go:282] 0 containers: []
	W0120 12:35:29.690789  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:29.690796  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:29.690857  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:29.721866  993585 cri.go:89] found id: ""
	I0120 12:35:29.721896  993585 logs.go:282] 0 containers: []
	W0120 12:35:29.721907  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:29.721915  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:29.721978  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:29.757564  993585 cri.go:89] found id: ""
	I0120 12:35:29.757596  993585 logs.go:282] 0 containers: []
	W0120 12:35:29.757628  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:29.757637  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:29.757712  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:29.790677  993585 cri.go:89] found id: ""
	I0120 12:35:29.790709  993585 logs.go:282] 0 containers: []
	W0120 12:35:29.790720  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:29.790728  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:29.790791  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:29.826917  993585 cri.go:89] found id: ""
	I0120 12:35:29.826953  993585 logs.go:282] 0 containers: []
	W0120 12:35:29.826965  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:29.826974  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:29.827039  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:29.861866  993585 cri.go:89] found id: ""
	I0120 12:35:29.861897  993585 logs.go:282] 0 containers: []
	W0120 12:35:29.861908  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:29.861916  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:29.861973  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:29.895508  993585 cri.go:89] found id: ""
	I0120 12:35:29.895543  993585 logs.go:282] 0 containers: []
	W0120 12:35:29.895554  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:29.895563  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:29.895623  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:29.927907  993585 cri.go:89] found id: ""
	I0120 12:35:29.927939  993585 logs.go:282] 0 containers: []
	W0120 12:35:29.927949  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:29.927961  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:29.927976  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:29.968111  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:29.968149  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:30.038475  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:30.038529  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:30.051650  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:30.051679  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:30.117850  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:30.117880  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:30.117896  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:32.712573  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:32.725809  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:32.725886  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:32.761768  993585 cri.go:89] found id: ""
	I0120 12:35:32.761803  993585 logs.go:282] 0 containers: []
	W0120 12:35:32.761812  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:32.761818  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:32.761875  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:32.797578  993585 cri.go:89] found id: ""
	I0120 12:35:32.797610  993585 logs.go:282] 0 containers: []
	W0120 12:35:32.797621  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:32.797628  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:32.797694  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:32.834493  993585 cri.go:89] found id: ""
	I0120 12:35:32.834539  993585 logs.go:282] 0 containers: []
	W0120 12:35:32.834552  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:32.834559  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:32.834644  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:32.870730  993585 cri.go:89] found id: ""
	I0120 12:35:32.870762  993585 logs.go:282] 0 containers: []
	W0120 12:35:32.870774  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:32.870782  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:32.870851  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:32.913904  993585 cri.go:89] found id: ""
	I0120 12:35:32.913932  993585 logs.go:282] 0 containers: []
	W0120 12:35:32.913943  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:32.913951  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:32.914019  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:32.955928  993585 cri.go:89] found id: ""
	I0120 12:35:32.955961  993585 logs.go:282] 0 containers: []
	W0120 12:35:32.955972  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:32.955981  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:32.956044  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:33.001075  993585 cri.go:89] found id: ""
	I0120 12:35:33.001116  993585 logs.go:282] 0 containers: []
	W0120 12:35:33.001129  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:33.001138  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:33.001209  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:33.035918  993585 cri.go:89] found id: ""
	I0120 12:35:33.035954  993585 logs.go:282] 0 containers: []
	W0120 12:35:33.035961  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:33.035971  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:33.035981  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:33.090782  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:33.090816  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:33.107144  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:33.107171  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:33.184808  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:33.184830  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:33.184845  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:33.269131  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:33.269170  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:35.809619  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:35.822178  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:35.822254  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:35.862005  993585 cri.go:89] found id: ""
	I0120 12:35:35.862035  993585 logs.go:282] 0 containers: []
	W0120 12:35:35.862042  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:35.862050  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:35.862110  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:35.896880  993585 cri.go:89] found id: ""
	I0120 12:35:35.896909  993585 logs.go:282] 0 containers: []
	W0120 12:35:35.896920  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:35.896928  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:35.896995  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:35.931762  993585 cri.go:89] found id: ""
	I0120 12:35:35.931795  993585 logs.go:282] 0 containers: []
	W0120 12:35:35.931806  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:35.931815  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:35.931882  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:35.965205  993585 cri.go:89] found id: ""
	I0120 12:35:35.965236  993585 logs.go:282] 0 containers: []
	W0120 12:35:35.965246  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:35.965254  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:35.965310  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:35.999903  993585 cri.go:89] found id: ""
	I0120 12:35:35.999926  993585 logs.go:282] 0 containers: []
	W0120 12:35:35.999943  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:35.999956  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:36.000004  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:36.033944  993585 cri.go:89] found id: ""
	I0120 12:35:36.033981  993585 logs.go:282] 0 containers: []
	W0120 12:35:36.033992  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:36.034005  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:36.034073  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:36.066986  993585 cri.go:89] found id: ""
	I0120 12:35:36.067021  993585 logs.go:282] 0 containers: []
	W0120 12:35:36.067035  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:36.067043  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:36.067108  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:36.096989  993585 cri.go:89] found id: ""
	I0120 12:35:36.097021  993585 logs.go:282] 0 containers: []
	W0120 12:35:36.097033  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:36.097047  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:36.097062  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:36.170812  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:36.170838  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:36.208578  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:36.208619  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:36.259448  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:36.259483  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:36.273938  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:36.273968  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:36.342621  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:38.843738  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:38.856444  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:38.856506  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:38.892000  993585 cri.go:89] found id: ""
	I0120 12:35:38.892027  993585 logs.go:282] 0 containers: []
	W0120 12:35:38.892037  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:38.892043  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:38.892093  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:38.930509  993585 cri.go:89] found id: ""
	I0120 12:35:38.930558  993585 logs.go:282] 0 containers: []
	W0120 12:35:38.930569  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:38.930577  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:38.930643  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:38.976632  993585 cri.go:89] found id: ""
	I0120 12:35:38.976675  993585 logs.go:282] 0 containers: []
	W0120 12:35:38.976687  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:38.976695  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:38.976763  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:39.021957  993585 cri.go:89] found id: ""
	I0120 12:35:39.021993  993585 logs.go:282] 0 containers: []
	W0120 12:35:39.022004  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:39.022011  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:39.022080  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:39.060311  993585 cri.go:89] found id: ""
	I0120 12:35:39.060352  993585 logs.go:282] 0 containers: []
	W0120 12:35:39.060366  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:39.060375  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:39.060441  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:39.097901  993585 cri.go:89] found id: ""
	I0120 12:35:39.097939  993585 logs.go:282] 0 containers: []
	W0120 12:35:39.097952  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:39.097961  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:39.098029  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:39.135291  993585 cri.go:89] found id: ""
	I0120 12:35:39.135328  993585 logs.go:282] 0 containers: []
	W0120 12:35:39.135341  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:39.135349  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:39.135415  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:39.178737  993585 cri.go:89] found id: ""
	I0120 12:35:39.178775  993585 logs.go:282] 0 containers: []
	W0120 12:35:39.178810  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:39.178822  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:39.178838  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:39.228677  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:39.228723  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:39.281237  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:39.281274  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:39.298505  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:39.298554  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:39.387325  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:39.387350  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:39.387364  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:41.981886  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:41.996139  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:41.996203  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:42.028240  993585 cri.go:89] found id: ""
	I0120 12:35:42.028267  993585 logs.go:282] 0 containers: []
	W0120 12:35:42.028279  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:42.028287  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:42.028351  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:42.063513  993585 cri.go:89] found id: ""
	I0120 12:35:42.063544  993585 logs.go:282] 0 containers: []
	W0120 12:35:42.063553  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:42.063561  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:42.063622  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:42.095602  993585 cri.go:89] found id: ""
	I0120 12:35:42.095637  993585 logs.go:282] 0 containers: []
	W0120 12:35:42.095648  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:42.095656  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:42.095712  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:42.128427  993585 cri.go:89] found id: ""
	I0120 12:35:42.128460  993585 logs.go:282] 0 containers: []
	W0120 12:35:42.128471  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:42.128478  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:42.128539  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:42.163430  993585 cri.go:89] found id: ""
	I0120 12:35:42.163462  993585 logs.go:282] 0 containers: []
	W0120 12:35:42.163473  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:42.163487  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:42.163601  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:42.212225  993585 cri.go:89] found id: ""
	I0120 12:35:42.212251  993585 logs.go:282] 0 containers: []
	W0120 12:35:42.212259  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:42.212265  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:42.212326  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:42.251596  993585 cri.go:89] found id: ""
	I0120 12:35:42.251623  993585 logs.go:282] 0 containers: []
	W0120 12:35:42.251631  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:42.251637  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:42.251697  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:42.288436  993585 cri.go:89] found id: ""
	I0120 12:35:42.288472  993585 logs.go:282] 0 containers: []
	W0120 12:35:42.288485  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:42.288498  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:42.288514  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:42.351809  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:42.351858  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:42.367697  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:42.367740  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:42.445420  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:42.445452  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:42.445470  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:42.529150  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:42.529190  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:45.068423  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:45.083648  993585 kubeadm.go:597] duration metric: took 4m4.248047549s to restartPrimaryControlPlane
	W0120 12:35:45.083733  993585 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0120 12:35:45.083773  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 12:35:48.615167  993585 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.531361181s)
	I0120 12:35:48.615262  993585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 12:35:48.629340  993585 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 12:35:48.640853  993585 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:35:48.653161  993585 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:35:48.653181  993585 kubeadm.go:157] found existing configuration files:
	
	I0120 12:35:48.653220  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 12:35:48.662422  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:35:48.662489  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:35:48.672006  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 12:35:48.681430  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:35:48.681493  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:35:48.690703  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 12:35:48.699479  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:35:48.699551  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:35:48.708576  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 12:35:48.717379  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:35:48.717440  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 12:35:48.727690  993585 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 12:35:48.809089  993585 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0120 12:35:48.809181  993585 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 12:35:48.968180  993585 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 12:35:48.968344  993585 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 12:35:48.968503  993585 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0120 12:35:49.164019  993585 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 12:35:49.166637  993585 out.go:235]   - Generating certificates and keys ...
	I0120 12:35:49.166743  993585 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 12:35:49.166851  993585 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 12:35:49.166969  993585 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 12:35:49.167055  993585 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 12:35:49.167163  993585 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 12:35:49.167247  993585 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 12:35:49.167333  993585 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 12:35:49.167596  993585 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 12:35:49.167953  993585 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 12:35:49.168592  993585 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 12:35:49.168717  993585 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 12:35:49.168824  993585 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 12:35:49.305660  993585 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 12:35:49.652487  993585 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 12:35:49.782615  993585 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 12:35:49.921695  993585 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 12:35:49.937706  993585 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 12:35:49.939001  993585 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 12:35:49.939074  993585 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 12:35:50.070984  993585 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 12:35:50.072848  993585 out.go:235]   - Booting up control plane ...
	I0120 12:35:50.072980  993585 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 12:35:50.082351  993585 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 12:35:50.082939  993585 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 12:35:50.083932  993585 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 12:35:50.088842  993585 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0120 12:36:30.091045  993585 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0120 12:36:30.091553  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:36:30.091777  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:36:35.092197  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:36:35.092442  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:36:45.093033  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:36:45.093302  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:37:05.094270  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:37:05.094487  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:37:45.096146  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:37:45.096378  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:37:45.096414  993585 kubeadm.go:310] 
	I0120 12:37:45.096477  993585 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0120 12:37:45.096535  993585 kubeadm.go:310] 		timed out waiting for the condition
	I0120 12:37:45.096547  993585 kubeadm.go:310] 
	I0120 12:37:45.096623  993585 kubeadm.go:310] 	This error is likely caused by:
	I0120 12:37:45.096688  993585 kubeadm.go:310] 		- The kubelet is not running
	I0120 12:37:45.096836  993585 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0120 12:37:45.096847  993585 kubeadm.go:310] 
	I0120 12:37:45.096982  993585 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0120 12:37:45.097022  993585 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0120 12:37:45.097075  993585 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0120 12:37:45.097088  993585 kubeadm.go:310] 
	I0120 12:37:45.097213  993585 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0120 12:37:45.097323  993585 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0120 12:37:45.097344  993585 kubeadm.go:310] 
	I0120 12:37:45.097440  993585 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0120 12:37:45.097575  993585 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0120 12:37:45.097684  993585 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0120 12:37:45.097786  993585 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0120 12:37:45.097798  993585 kubeadm.go:310] 
	I0120 12:37:45.098707  993585 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 12:37:45.098836  993585 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0120 12:37:45.098939  993585 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0120 12:37:45.099133  993585 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0120 12:37:45.099186  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 12:37:45.553353  993585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 12:37:45.568252  993585 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:37:45.577030  993585 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:37:45.577047  993585 kubeadm.go:157] found existing configuration files:
	
	I0120 12:37:45.577084  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 12:37:45.585663  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:37:45.585715  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:37:45.594051  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 12:37:45.602109  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:37:45.602159  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:37:45.610431  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 12:37:45.619241  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:37:45.619279  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:37:45.627467  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 12:37:45.636457  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:37:45.636508  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 12:37:45.644627  993585 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 12:37:45.711254  993585 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0120 12:37:45.711363  993585 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 12:37:45.852391  993585 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 12:37:45.852543  993585 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 12:37:45.852693  993585 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0120 12:37:46.034483  993585 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 12:37:46.036223  993585 out.go:235]   - Generating certificates and keys ...
	I0120 12:37:46.036346  993585 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 12:37:46.036455  993585 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 12:37:46.036570  993585 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 12:37:46.036663  993585 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 12:37:46.036789  993585 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 12:37:46.036889  993585 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 12:37:46.037251  993585 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 12:37:46.037740  993585 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 12:37:46.038025  993585 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 12:37:46.038414  993585 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 12:37:46.038478  993585 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 12:37:46.038581  993585 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 12:37:46.266444  993585 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 12:37:46.393858  993585 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 12:37:46.536948  993585 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 12:37:46.765338  993585 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 12:37:46.783975  993585 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 12:37:46.785028  993585 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 12:37:46.785076  993585 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 12:37:46.920894  993585 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 12:37:46.922757  993585 out.go:235]   - Booting up control plane ...
	I0120 12:37:46.922892  993585 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 12:37:46.929056  993585 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 12:37:46.933400  993585 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 12:37:46.933527  993585 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 12:37:46.939663  993585 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0120 12:38:26.942147  993585 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0120 12:38:26.942793  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:38:26.943016  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:38:31.943340  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:38:31.943563  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:38:41.944064  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:38:41.944316  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:39:01.944375  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:39:01.944608  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:39:41.943032  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:39:41.943264  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:39:41.943273  993585 kubeadm.go:310] 
	I0120 12:39:41.943326  993585 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0120 12:39:41.943363  993585 kubeadm.go:310] 		timed out waiting for the condition
	I0120 12:39:41.943383  993585 kubeadm.go:310] 
	I0120 12:39:41.943444  993585 kubeadm.go:310] 	This error is likely caused by:
	I0120 12:39:41.943506  993585 kubeadm.go:310] 		- The kubelet is not running
	I0120 12:39:41.943609  993585 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0120 12:39:41.943617  993585 kubeadm.go:310] 
	I0120 12:39:41.943716  993585 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0120 12:39:41.943762  993585 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0120 12:39:41.943814  993585 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0120 12:39:41.943826  993585 kubeadm.go:310] 
	I0120 12:39:41.943914  993585 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0120 12:39:41.944033  993585 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0120 12:39:41.944052  993585 kubeadm.go:310] 
	I0120 12:39:41.944219  993585 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0120 12:39:41.944348  993585 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0120 12:39:41.944450  993585 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0120 12:39:41.944591  993585 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0120 12:39:41.944613  993585 kubeadm.go:310] 
	I0120 12:39:41.945529  993585 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 12:39:41.945621  993585 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0120 12:39:41.945690  993585 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0120 12:39:41.945758  993585 kubeadm.go:394] duration metric: took 8m1.157734369s to StartCluster
	I0120 12:39:41.945816  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:39:41.945871  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:39:41.989147  993585 cri.go:89] found id: ""
	I0120 12:39:41.989175  993585 logs.go:282] 0 containers: []
	W0120 12:39:41.989183  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:39:41.989188  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:39:41.989251  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:39:42.021608  993585 cri.go:89] found id: ""
	I0120 12:39:42.021631  993585 logs.go:282] 0 containers: []
	W0120 12:39:42.021639  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:39:42.021646  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:39:42.021706  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:39:42.062565  993585 cri.go:89] found id: ""
	I0120 12:39:42.062592  993585 logs.go:282] 0 containers: []
	W0120 12:39:42.062601  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:39:42.062607  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:39:42.062659  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:39:42.097040  993585 cri.go:89] found id: ""
	I0120 12:39:42.097067  993585 logs.go:282] 0 containers: []
	W0120 12:39:42.097075  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:39:42.097081  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:39:42.097144  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:39:42.128833  993585 cri.go:89] found id: ""
	I0120 12:39:42.128862  993585 logs.go:282] 0 containers: []
	W0120 12:39:42.128873  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:39:42.128880  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:39:42.128936  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:39:42.159564  993585 cri.go:89] found id: ""
	I0120 12:39:42.159596  993585 logs.go:282] 0 containers: []
	W0120 12:39:42.159608  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:39:42.159616  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:39:42.159676  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:39:42.189336  993585 cri.go:89] found id: ""
	I0120 12:39:42.189367  993585 logs.go:282] 0 containers: []
	W0120 12:39:42.189378  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:39:42.189386  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:39:42.189450  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:39:42.228745  993585 cri.go:89] found id: ""
	I0120 12:39:42.228776  993585 logs.go:282] 0 containers: []
	W0120 12:39:42.228787  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:39:42.228801  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:39:42.228818  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:39:42.244466  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:39:42.244508  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:39:42.336809  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:39:42.336832  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:39:42.336844  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:39:42.443413  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:39:42.443445  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:39:42.481436  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:39:42.481466  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0120 12:39:42.533396  993585 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0120 12:39:42.533472  993585 out.go:270] * 
	* 
	W0120 12:39:42.533585  993585 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0120 12:39:42.533610  993585 out.go:270] * 
	* 
	W0120 12:39:42.534617  993585 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0120 12:39:42.537661  993585 out.go:201] 
	W0120 12:39:42.538809  993585 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0120 12:39:42.538865  993585 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0120 12:39:42.538897  993585 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0120 12:39:42.540269  993585 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-134433 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
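Note: the failure above is the K8S_KUBELET_NOT_RUNNING path: kubeadm init timed out because the kubelet never answered its healthz check on 127.0.0.1:10248. A minimal follow-up sketch (not part of the test run) is to rerun the diagnostic commands that kubeadm and minikube themselves suggest in the log; the profile name, cri-o socket path, driver, runtime, and kubelet cgroup-driver override below are all taken from this run's output, and CONTAINERID is a placeholder for whichever container turns up as failing.

  # Check kubelet state and recent journal output on the node
  out/minikube-linux-amd64 -p old-k8s-version-134433 ssh "sudo systemctl status kubelet"
  out/minikube-linux-amd64 -p old-k8s-version-134433 ssh "sudo journalctl -xeu kubelet | tail -n 100"

  # List any control-plane containers cri-o started, then inspect the logs of a failing one
  out/minikube-linux-amd64 -p old-k8s-version-134433 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
  out/minikube-linux-amd64 -p old-k8s-version-134433 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID"

  # Retry the start with the cgroup-driver override suggested in the log above
  out/minikube-linux-amd64 start -p old-k8s-version-134433 --driver=kvm2 --container-runtime=crio \
    --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd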
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-134433 -n old-k8s-version-134433
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-134433 -n old-k8s-version-134433: exit status 2 (246.883946ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-134433 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-134433 logs -n 25: (1.021801192s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-049625                           | kubernetes-upgrade-049625    | jenkins | v1.35.0 | 20 Jan 25 12:25 UTC | 20 Jan 25 12:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-049625                           | kubernetes-upgrade-049625    | jenkins | v1.35.0 | 20 Jan 25 12:26 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-049625                           | kubernetes-upgrade-049625    | jenkins | v1.35.0 | 20 Jan 25 12:26 UTC | 20 Jan 25 12:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-049625                           | kubernetes-upgrade-049625    | jenkins | v1.35.0 | 20 Jan 25 12:26 UTC | 20 Jan 25 12:26 UTC |
	| start   | -p embed-certs-987349                                  | embed-certs-987349           | jenkins | v1.35.0 | 20 Jan 25 12:26 UTC | 20 Jan 25 12:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-496524             | no-preload-496524            | jenkins | v1.35.0 | 20 Jan 25 12:26 UTC | 20 Jan 25 12:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-496524                                   | no-preload-496524            | jenkins | v1.35.0 | 20 Jan 25 12:26 UTC | 20 Jan 25 12:28 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-673364                              | cert-expiration-673364       | jenkins | v1.35.0 | 20 Jan 25 12:27 UTC | 20 Jan 25 12:27 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-673364                              | cert-expiration-673364       | jenkins | v1.35.0 | 20 Jan 25 12:27 UTC | 20 Jan 25 12:27 UTC |
	| delete  | -p                                                     | disable-driver-mounts-969801 | jenkins | v1.35.0 | 20 Jan 25 12:27 UTC | 20 Jan 25 12:27 UTC |
	|         | disable-driver-mounts-969801                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-981597 | jenkins | v1.35.0 | 20 Jan 25 12:27 UTC | 20 Jan 25 12:28 UTC |
	|         | default-k8s-diff-port-981597                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-987349            | embed-certs-987349           | jenkins | v1.35.0 | 20 Jan 25 12:27 UTC | 20 Jan 25 12:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-987349                                  | embed-certs-987349           | jenkins | v1.35.0 | 20 Jan 25 12:27 UTC | 20 Jan 25 12:29 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-496524                  | no-preload-496524            | jenkins | v1.35.0 | 20 Jan 25 12:28 UTC | 20 Jan 25 12:28 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-496524                                   | no-preload-496524            | jenkins | v1.35.0 | 20 Jan 25 12:28 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-981597  | default-k8s-diff-port-981597 | jenkins | v1.35.0 | 20 Jan 25 12:28 UTC | 20 Jan 25 12:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-981597 | jenkins | v1.35.0 | 20 Jan 25 12:28 UTC | 20 Jan 25 12:30 UTC |
	|         | default-k8s-diff-port-981597                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-987349                 | embed-certs-987349           | jenkins | v1.35.0 | 20 Jan 25 12:29 UTC | 20 Jan 25 12:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-987349                                  | embed-certs-987349           | jenkins | v1.35.0 | 20 Jan 25 12:29 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-134433        | old-k8s-version-134433       | jenkins | v1.35.0 | 20 Jan 25 12:29 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-981597       | default-k8s-diff-port-981597 | jenkins | v1.35.0 | 20 Jan 25 12:30 UTC | 20 Jan 25 12:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-981597 | jenkins | v1.35.0 | 20 Jan 25 12:30 UTC |                     |
	|         | default-k8s-diff-port-981597                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-134433                              | old-k8s-version-134433       | jenkins | v1.35.0 | 20 Jan 25 12:31 UTC | 20 Jan 25 12:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-134433             | old-k8s-version-134433       | jenkins | v1.35.0 | 20 Jan 25 12:31 UTC | 20 Jan 25 12:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-134433                              | old-k8s-version-134433       | jenkins | v1.35.0 | 20 Jan 25 12:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
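	
	The last entry in the audit table is the `start` invocation whose log follows below. As a rough illustration (not the test harness's actual helper), a minimal Go sketch of running that same command via os/exec and capturing its combined output, using the MINIKUBE_BIN path shown later in this log:
	
	    // Sketch only: runs the final "start" row from the audit table and prints
	    // whatever the command writes to stdout/stderr. Error handling is simplified.
	    package main
	
	    import (
	        "fmt"
	        "os/exec"
	    )
	
	    func main() {
	        cmd := exec.Command("out/minikube-linux-amd64", "start",
	            "-p", "old-k8s-version-134433",
	            "--memory=2200",
	            "--alsologtostderr", "--wait=true",
	            "--kvm-network=default",
	            "--kvm-qemu-uri=qemu:///system",
	            "--disable-driver-mounts",
	            "--keep-context=false",
	            "--driver=kvm2",
	            "--container-runtime=crio",
	            "--kubernetes-version=v1.20.0",
	        )
	        out, err := cmd.CombinedOutput() // stdout and stderr interleaved, as in the log below
	        fmt.Printf("%s", out)
	        if err != nil {
	            fmt.Println("start failed:", err)
	        }
	    }
	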
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 12:31:11
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 12:31:11.956010  993585 out.go:345] Setting OutFile to fd 1 ...
	I0120 12:31:11.956137  993585 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:31:11.956148  993585 out.go:358] Setting ErrFile to fd 2...
	I0120 12:31:11.956152  993585 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:31:11.956366  993585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-942401/.minikube/bin
	I0120 12:31:11.956993  993585 out.go:352] Setting JSON to false
	I0120 12:31:11.958067  993585 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":18815,"bootTime":1737357457,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 12:31:11.958186  993585 start.go:139] virtualization: kvm guest
	I0120 12:31:11.960398  993585 out.go:177] * [old-k8s-version-134433] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 12:31:11.961613  993585 out.go:177]   - MINIKUBE_LOCATION=20151
	I0120 12:31:11.961713  993585 notify.go:220] Checking for updates...
	I0120 12:31:11.964011  993585 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 12:31:11.965092  993585 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:31:11.966144  993585 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-942401/.minikube
	I0120 12:31:11.967208  993585 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 12:31:11.968350  993585 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 12:31:11.969863  993585 config.go:182] Loaded profile config "old-k8s-version-134433": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0120 12:31:11.970277  993585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:31:11.970346  993585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:31:11.985419  993585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43807
	I0120 12:31:11.985879  993585 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:31:11.986551  993585 main.go:141] libmachine: Using API Version  1
	I0120 12:31:11.986596  993585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:31:11.986957  993585 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:31:11.987146  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:31:11.988784  993585 out.go:177] * Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
	I0120 12:31:11.989825  993585 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 12:31:11.990150  993585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:31:11.990189  993585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:31:12.004831  993585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34269
	I0120 12:31:12.005226  993585 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:31:12.005709  993585 main.go:141] libmachine: Using API Version  1
	I0120 12:31:12.005734  993585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:31:12.006077  993585 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:31:12.006313  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:31:12.043016  993585 out.go:177] * Using the kvm2 driver based on existing profile
	I0120 12:31:12.044104  993585 start.go:297] selected driver: kvm2
	I0120 12:31:12.044121  993585 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-134433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-1
34433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube
-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:31:12.044209  993585 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 12:31:12.044916  993585 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:31:12.045000  993585 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20151-942401/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 12:31:12.060200  993585 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 12:31:12.060534  993585 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 12:31:12.060567  993585 cni.go:84] Creating CNI manager for ""
	I0120 12:31:12.060601  993585 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:31:12.060657  993585 start.go:340] cluster config:
	{Name:old-k8s-version-134433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-134433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:31:12.060783  993585 iso.go:125] acquiring lock: {Name:mk7f0ac9e7ba04626414e742c3e5292d79996a4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:31:12.062963  993585 out.go:177] * Starting "old-k8s-version-134433" primary control-plane node in "old-k8s-version-134433" cluster
	I0120 12:31:12.064143  993585 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0120 12:31:12.064184  993585 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0120 12:31:12.064195  993585 cache.go:56] Caching tarball of preloaded images
	I0120 12:31:12.064275  993585 preload.go:172] Found /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 12:31:12.064287  993585 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0120 12:31:12.064378  993585 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/config.json ...
	I0120 12:31:12.064565  993585 start.go:360] acquireMachinesLock for old-k8s-version-134433: {Name:mkd5527ce9753efd08511b23d71dbb6bbf416f1b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 12:31:12.064608  993585 start.go:364] duration metric: took 25.197µs to acquireMachinesLock for "old-k8s-version-134433"
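	
	acquireMachinesLock serializes access to machine state with a 500ms retry delay and a 13m timeout (acquired in microseconds here because nothing else held it). A generic, Linux-only Go sketch of such a lock using flock; the lock-file path is hypothetical and this is not minikube's actual lock implementation:
	
	    // Generic exclusive-lock-with-timeout sketch using flock (Linux).
	    package main
	
	    import (
	        "fmt"
	        "os"
	        "syscall"
	        "time"
	    )
	
	    func acquireLock(path string, delay, timeout time.Duration) (*os.File, error) {
	        f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	        if err != nil {
	            return nil, err
	        }
	        deadline := time.Now().Add(timeout)
	        for {
	            // Non-blocking exclusive flock; retry every `delay` until the deadline.
	            if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err == nil {
	                return f, nil
	            }
	            if time.Now().After(deadline) {
	                f.Close()
	                return nil, fmt.Errorf("timed out waiting for lock %s", path)
	            }
	            time.Sleep(delay)
	        }
	    }
	
	    func main() {
	        start := time.Now()
	        f, err := acquireLock("/tmp/machines.lock", 500*time.Millisecond, 13*time.Minute)
	        if err != nil {
	            fmt.Println(err)
	            return
	        }
	        defer f.Close()
	        fmt.Println("acquired lock in", time.Since(start))
	    }
	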
	I0120 12:31:12.064624  993585 start.go:96] Skipping create...Using existing machine configuration
	I0120 12:31:12.064632  993585 fix.go:54] fixHost starting: 
	I0120 12:31:12.064897  993585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:31:12.064947  993585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:31:12.079979  993585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41799
	I0120 12:31:12.080385  993585 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:31:12.080944  993585 main.go:141] libmachine: Using API Version  1
	I0120 12:31:12.080969  993585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:31:12.081279  993585 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:31:12.081512  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:31:12.081673  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetState
	I0120 12:31:12.083222  993585 fix.go:112] recreateIfNeeded on old-k8s-version-134433: state=Stopped err=<nil>
	I0120 12:31:12.083247  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	W0120 12:31:12.083395  993585 fix.go:138] unexpected machine state, will restart: <nil>
	I0120 12:31:12.084950  993585 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-134433" ...
	I0120 12:31:07.641120  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:10.142764  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:10.684376  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:12.684889  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:11.967640  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:13.968387  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
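	
	The pod_ready lines interleaved above poll the Ready condition of the metrics-server pods roughly every couple of seconds. A minimal client-go sketch of that kind of poll, assuming the KUBECONFIG path from this run and a deadline chosen only for illustration (this is not minikube's pod_ready.go):
	
	    // Poll a pod's Ready condition until it is true or the deadline passes.
	    package main
	
	    import (
	        "context"
	        "fmt"
	        "time"
	
	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )
	
	    func podReady(pod *corev1.Pod) bool {
	        for _, c := range pod.Status.Conditions {
	            if c.Type == corev1.PodReady {
	                return c.Status == corev1.ConditionTrue
	            }
	        }
	        return false
	    }
	
	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20151-942401/kubeconfig")
	        if err != nil {
	            panic(err)
	        }
	        client, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        deadline := time.Now().Add(10 * time.Minute) // illustrative deadline
	        for time.Now().Before(deadline) {
	            pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-f79f97bbb-shgd4", metav1.GetOptions{})
	            if err == nil && podReady(pod) {
	                fmt.Println("pod is Ready")
	                return
	            }
	            fmt.Println("pod not Ready yet, retrying")
	            time.Sleep(2 * time.Second)
	        }
	        fmt.Println("timed out waiting for pod to become Ready")
	    }
	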
	I0120 12:31:12.086040  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .Start
	I0120 12:31:12.086250  993585 main.go:141] libmachine: (old-k8s-version-134433) starting domain...
	I0120 12:31:12.086274  993585 main.go:141] libmachine: (old-k8s-version-134433) ensuring networks are active...
	I0120 12:31:12.087116  993585 main.go:141] libmachine: (old-k8s-version-134433) Ensuring network default is active
	I0120 12:31:12.087507  993585 main.go:141] libmachine: (old-k8s-version-134433) Ensuring network mk-old-k8s-version-134433 is active
	I0120 12:31:12.087972  993585 main.go:141] libmachine: (old-k8s-version-134433) getting domain XML...
	I0120 12:31:12.088701  993585 main.go:141] libmachine: (old-k8s-version-134433) creating domain...
	I0120 12:31:13.353235  993585 main.go:141] libmachine: (old-k8s-version-134433) waiting for IP...
	I0120 12:31:13.354008  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:13.354424  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:13.354568  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:13.354436  993621 retry.go:31] will retry after 195.738853ms: waiting for domain to come up
	I0120 12:31:13.551979  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:13.552485  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:13.552546  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:13.552470  993621 retry.go:31] will retry after 286.807934ms: waiting for domain to come up
	I0120 12:31:13.841028  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:13.841561  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:13.841601  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:13.841522  993621 retry.go:31] will retry after 438.177816ms: waiting for domain to come up
	I0120 12:31:14.280867  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:14.281254  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:14.281287  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:14.281212  993621 retry.go:31] will retry after 401.413585ms: waiting for domain to come up
	I0120 12:31:14.684677  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:14.685256  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:14.685288  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:14.685176  993621 retry.go:31] will retry after 625.770313ms: waiting for domain to come up
	I0120 12:31:15.312721  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:15.313245  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:15.313281  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:15.313210  993621 retry.go:31] will retry after 842.789855ms: waiting for domain to come up
	I0120 12:31:16.157329  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:16.157939  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:16.157970  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:16.157917  993621 retry.go:31] will retry after 997.649049ms: waiting for domain to come up
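	
	The retry lines above come from a poll-with-backoff wait for the restarted VM's DHCP lease. A minimal stdlib Go sketch of that pattern follows; lookupDomainIP is a hypothetical stand-in for the driver's libvirt lease lookup, and the growth factor only approximates the delays logged above:
	
	    // Poll for a domain's IP with an increasing delay between attempts.
	    package main
	
	    import (
	        "errors"
	        "fmt"
	        "time"
	    )
	
	    // lookupDomainIP is assumed for illustration; it would query the libvirt
	    // network's DHCP leases for the domain's MAC address. This stub always fails.
	    func lookupDomainIP(domain string) (string, error) {
	        return "", errors.New("unable to find current IP address")
	    }
	
	    func waitForIP(domain string, timeout time.Duration) (string, error) {
	        deadline := time.Now().Add(timeout)
	        delay := 200 * time.Millisecond
	        for time.Now().Before(deadline) {
	            if ip, err := lookupDomainIP(domain); err == nil {
	                return ip, nil
	            }
	            fmt.Printf("will retry after %v: waiting for domain to come up\n", delay)
	            time.Sleep(delay)
	            if delay < 4*time.Second {
	                delay = delay * 3 / 2 // grow the backoff, roughly as in the log above
	            }
	        }
	        return "", fmt.Errorf("timed out waiting for %s to come up", domain)
	    }
	
	    func main() {
	        // Short timeout for the demo; the real wait runs for minutes.
	        ip, err := waitForIP("old-k8s-version-134433", 5*time.Second)
	        fmt.Println(ip, err)
	    }
	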
	I0120 12:31:12.642593  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:15.141471  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:17.141620  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:14.686169  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:17.184821  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:16.467025  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:18.966945  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:17.157668  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:17.158288  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:17.158346  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:17.158266  993621 retry.go:31] will retry after 1.3317802s: waiting for domain to come up
	I0120 12:31:18.491767  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:18.492314  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:18.492345  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:18.492274  993621 retry.go:31] will retry after 1.684115629s: waiting for domain to come up
	I0120 12:31:20.177742  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:20.178312  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:20.178344  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:20.178272  993621 retry.go:31] will retry after 2.098717757s: waiting for domain to come up
	I0120 12:31:19.141727  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:21.142012  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:19.684947  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:21.686415  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:24.185262  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:20.969393  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:23.466563  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:25.468388  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:22.279263  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:22.279782  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:22.279815  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:22.279747  993621 retry.go:31] will retry after 2.908067158s: waiting for domain to come up
	I0120 12:31:25.191591  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:25.192058  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:25.192082  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:25.192027  993621 retry.go:31] will retry after 2.860704715s: waiting for domain to come up
	I0120 12:31:23.142601  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:25.641748  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:26.685300  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:29.186578  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:27.967731  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:30.467076  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:28.053824  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:28.054209  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:28.054237  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:28.054168  993621 retry.go:31] will retry after 3.593877393s: waiting for domain to come up
	I0120 12:31:31.651977  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.652456  993585 main.go:141] libmachine: (old-k8s-version-134433) found domain IP: 192.168.50.250
	I0120 12:31:31.652477  993585 main.go:141] libmachine: (old-k8s-version-134433) reserving static IP address...
	I0120 12:31:31.652499  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has current primary IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.652880  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "old-k8s-version-134433", mac: "52:54:00:4a:b6:e2", ip: "192.168.50.250"} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:31.652910  993585 main.go:141] libmachine: (old-k8s-version-134433) reserved static IP address 192.168.50.250 for domain old-k8s-version-134433
	I0120 12:31:31.652928  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | skip adding static IP to network mk-old-k8s-version-134433 - found existing host DHCP lease matching {name: "old-k8s-version-134433", mac: "52:54:00:4a:b6:e2", ip: "192.168.50.250"}
	I0120 12:31:31.652949  993585 main.go:141] libmachine: (old-k8s-version-134433) waiting for SSH...
	I0120 12:31:31.652979  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | Getting to WaitForSSH function...
	I0120 12:31:31.655045  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.655323  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:31.655341  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.655472  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | Using SSH client type: external
	I0120 12:31:31.655509  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | Using SSH private key: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433/id_rsa (-rw-------)
	I0120 12:31:31.655555  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.250 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 12:31:31.655574  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | About to run SSH command:
	I0120 12:31:31.655599  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | exit 0
	I0120 12:31:31.778333  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | SSH cmd err, output: <nil>: 
	I0120 12:31:31.778766  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetConfigRaw
	I0120 12:31:31.779451  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetIP
	I0120 12:31:31.782111  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.782481  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:31.782538  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.782728  993585 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/config.json ...
	I0120 12:31:31.782983  993585 machine.go:93] provisionDockerMachine start ...
	I0120 12:31:31.783008  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:31:31.783221  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:31.785482  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.785771  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:31.785804  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.785958  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:31:31.786153  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:31.786352  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:31.786496  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:31:31.786666  993585 main.go:141] libmachine: Using SSH client type: native
	I0120 12:31:31.786905  993585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I0120 12:31:31.786918  993585 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 12:31:31.886822  993585 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0120 12:31:31.886860  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetMachineName
	I0120 12:31:31.887127  993585 buildroot.go:166] provisioning hostname "old-k8s-version-134433"
	I0120 12:31:31.887156  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetMachineName
	I0120 12:31:31.887366  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:31.890506  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.890962  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:31.891053  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.891155  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:31:31.891355  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:31.891522  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:31.891722  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:31:31.891900  993585 main.go:141] libmachine: Using SSH client type: native
	I0120 12:31:31.892067  993585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I0120 12:31:31.892078  993585 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-134433 && echo "old-k8s-version-134433" | sudo tee /etc/hostname
	I0120 12:31:27.642107  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:30.141452  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:32.142854  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:32.007463  993585 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-134433
	
	I0120 12:31:32.007490  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:32.010730  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.011157  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.011184  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.011407  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:31:32.011597  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.011774  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.011883  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:31:32.012032  993585 main.go:141] libmachine: Using SSH client type: native
	I0120 12:31:32.012246  993585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I0120 12:31:32.012275  993585 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-134433' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-134433/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-134433' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 12:31:32.122811  993585 main.go:141] libmachine: SSH cmd err, output: <nil>: 
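	
	The provisioning step above connects as the docker user with the machine's private key and runs the hostname commands over SSH. A minimal golang.org/x/crypto/ssh sketch of the same step, using the key path and address from this log (not minikube's libmachine wrapper):
	
	    // Connect with key-based auth and run a remote provisioning command.
	    package main
	
	    import (
	        "fmt"
	        "os"
	
	        "golang.org/x/crypto/ssh"
	    )
	
	    func main() {
	        keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433/id_rsa")
	        if err != nil {
	            panic(err)
	        }
	        signer, err := ssh.ParsePrivateKey(keyBytes)
	        if err != nil {
	            panic(err)
	        }
	        cfg := &ssh.ClientConfig{
	            User:            "docker",
	            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
	            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	        }
	        client, err := ssh.Dial("tcp", "192.168.50.250:22", cfg)
	        if err != nil {
	            panic(err)
	        }
	        defer client.Close()
	
	        session, err := client.NewSession()
	        if err != nil {
	            panic(err)
	        }
	        defer session.Close()
	
	        cmd := `sudo hostname old-k8s-version-134433 && echo "old-k8s-version-134433" | sudo tee /etc/hostname`
	        out, err := session.CombinedOutput(cmd)
	        fmt.Printf("%s", out)
	        if err != nil {
	            fmt.Println("command failed:", err)
	        }
	    }
	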
	I0120 12:31:32.122845  993585 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20151-942401/.minikube CaCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20151-942401/.minikube}
	I0120 12:31:32.122865  993585 buildroot.go:174] setting up certificates
	I0120 12:31:32.122875  993585 provision.go:84] configureAuth start
	I0120 12:31:32.122884  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetMachineName
	I0120 12:31:32.123125  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetIP
	I0120 12:31:32.125986  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.126423  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.126446  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.126677  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:32.128626  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.129281  993585 provision.go:143] copyHostCerts
	I0120 12:31:32.129354  993585 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem, removing ...
	I0120 12:31:32.129380  993585 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem
	I0120 12:31:32.129382  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.129411  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.129470  993585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem (1675 bytes)
	I0120 12:31:32.129581  993585 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem, removing ...
	I0120 12:31:32.129592  993585 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem
	I0120 12:31:32.129634  993585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem (1078 bytes)
	I0120 12:31:32.129702  993585 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem, removing ...
	I0120 12:31:32.129712  993585 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem
	I0120 12:31:32.129741  993585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem (1123 bytes)
	I0120 12:31:32.129806  993585 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-134433 san=[127.0.0.1 192.168.50.250 localhost minikube old-k8s-version-134433]
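	
	provision.go generates a server certificate whose SANs are listed above (127.0.0.1, 192.168.50.250, localhost, minikube, old-k8s-version-134433). A generic crypto/x509 sketch of issuing such a certificate; the CA key pair is generated inline only so the example is self-contained, whereas the real run loads ca.pem and ca-key.pem (error handling omitted for brevity):
	
	    // Issue a server certificate with DNS and IP SANs, signed by a CA.
	    package main
	
	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "encoding/pem"
	        "math/big"
	        "net"
	        "os"
	        "time"
	    )
	
	    func main() {
	        // Throwaway CA so the sketch runs standalone; minikube loads its existing CA.
	        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	        caTmpl := &x509.Certificate{
	            SerialNumber:          big.NewInt(1),
	            Subject:               pkix.Name{CommonName: "minikubeCA"},
	            NotBefore:             time.Now(),
	            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
	            IsCA:                  true,
	            KeyUsage:              x509.KeyUsageCertSign,
	            BasicConstraintsValid: true,
	        }
	        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	        caCert, _ := x509.ParseCertificate(caDER)
	
	        serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	        serverTmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(2),
	            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-134433"}},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile above
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	            DNSNames:     []string{"localhost", "minikube", "old-k8s-version-134433"},
	            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.250")},
	        }
	        der, _ := x509.CreateCertificate(rand.Reader, serverTmpl, caCert, &serverKey.PublicKey, caKey)
	        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	    }
	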
	I0120 12:31:32.226358  993585 provision.go:177] copyRemoteCerts
	I0120 12:31:32.226410  993585 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 12:31:32.226432  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:32.228814  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.229133  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.229168  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.229333  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:31:32.229548  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.229722  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:31:32.229881  993585 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433/id_rsa Username:docker}
	I0120 12:31:32.315787  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0120 12:31:32.341389  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0120 12:31:32.364095  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0120 12:31:32.386543  993585 provision.go:87] duration metric: took 263.65519ms to configureAuth
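
The provision step above generates a server certificate whose SAN list comes straight from the log (san=[127.0.0.1 192.168.50.250 localhost minikube old-k8s-version-134433]) and then copies ca.pem, server.pem and server-key.pem into /etc/docker over SSH. Below is a rough, hedged illustration of how such a SAN certificate can be produced with Go's standard library; it is self-signed for brevity (minikube's provision.go signs with ca.pem/ca-key.pem) and none of the names are its actual code.

// Minimal sketch: build a server certificate carrying the SANs from the log.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key for the server certificate.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	// SANs mirroring san=[127.0.0.1 192.168.50.250 localhost minikube old-k8s-version-134433].
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-134433"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.250")},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-134433"},
	}

	// Self-signed here for brevity; the real flow signs with the ca.pem/ca-key.pem pair.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	out, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}
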
	I0120 12:31:32.386572  993585 buildroot.go:189] setting minikube options for container-runtime
	I0120 12:31:32.386750  993585 config.go:182] Loaded profile config "old-k8s-version-134433": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0120 12:31:32.386844  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:32.389737  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.390222  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.390257  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.390478  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:31:32.390683  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.390858  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.391063  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:31:32.391234  993585 main.go:141] libmachine: Using SSH client type: native
	I0120 12:31:32.391417  993585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I0120 12:31:32.391438  993585 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 12:31:32.617034  993585 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 12:31:32.617072  993585 machine.go:96] duration metric: took 834.071068ms to provisionDockerMachine
	I0120 12:31:32.617085  993585 start.go:293] postStartSetup for "old-k8s-version-134433" (driver="kvm2")
	I0120 12:31:32.617096  993585 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 12:31:32.617121  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:31:32.617506  993585 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 12:31:32.617547  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:32.620838  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.621275  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.621310  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.621640  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:31:32.621865  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.622064  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:31:32.622248  993585 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433/id_rsa Username:docker}
	I0120 12:31:32.703904  993585 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 12:31:32.707878  993585 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 12:31:32.707902  993585 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-942401/.minikube/addons for local assets ...
	I0120 12:31:32.707970  993585 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-942401/.minikube/files for local assets ...
	I0120 12:31:32.708078  993585 filesync.go:149] local asset: /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem -> 9496562.pem in /etc/ssl/certs
	I0120 12:31:32.708218  993585 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 12:31:32.716746  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem --> /etc/ssl/certs/9496562.pem (1708 bytes)
	I0120 12:31:32.739636  993585 start.go:296] duration metric: took 122.539492ms for postStartSetup
	I0120 12:31:32.739674  993585 fix.go:56] duration metric: took 20.675041615s for fixHost
	I0120 12:31:32.739700  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:32.742857  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.743259  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.743291  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.743451  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:31:32.743616  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.743807  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.743953  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:31:32.744112  993585 main.go:141] libmachine: Using SSH client type: native
	I0120 12:31:32.744267  993585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I0120 12:31:32.744277  993585 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 12:31:32.850613  993585 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737376292.825194263
	
	I0120 12:31:32.850655  993585 fix.go:216] guest clock: 1737376292.825194263
	I0120 12:31:32.850667  993585 fix.go:229] Guest: 2025-01-20 12:31:32.825194263 +0000 UTC Remote: 2025-01-20 12:31:32.739679914 +0000 UTC m=+20.823511960 (delta=85.514349ms)
	I0120 12:31:32.850692  993585 fix.go:200] guest clock delta is within tolerance: 85.514349ms
	I0120 12:31:32.850697  993585 start.go:83] releasing machines lock for "old-k8s-version-134433", held for 20.786078788s
	I0120 12:31:32.850723  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:31:32.850994  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetIP
	I0120 12:31:32.853508  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.853864  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.853895  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.854081  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:31:32.854574  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:31:32.854785  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:31:32.854878  993585 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 12:31:32.854915  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:32.855040  993585 ssh_runner.go:195] Run: cat /version.json
	I0120 12:31:32.855073  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:32.857825  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.858071  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.858242  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.858273  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.858472  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:31:32.858613  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.858642  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.858678  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.858803  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:31:32.858907  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:31:32.858970  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.859042  993585 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433/id_rsa Username:docker}
	I0120 12:31:32.859089  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:31:32.859218  993585 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433/id_rsa Username:docker}
	I0120 12:31:32.963636  993585 ssh_runner.go:195] Run: systemctl --version
	I0120 12:31:32.969637  993585 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 12:31:33.109368  993585 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 12:31:33.116476  993585 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 12:31:33.116551  993585 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 12:31:33.132563  993585 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 12:31:33.132586  993585 start.go:495] detecting cgroup driver to use...
	I0120 12:31:33.132666  993585 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 12:31:33.149598  993585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 12:31:33.163579  993585 docker.go:217] disabling cri-docker service (if available) ...
	I0120 12:31:33.163644  993585 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 12:31:33.176714  993585 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 12:31:33.190002  993585 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 12:31:33.317215  993585 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 12:31:33.474712  993585 docker.go:233] disabling docker service ...
	I0120 12:31:33.474786  993585 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 12:31:33.487733  993585 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 12:31:33.500315  993585 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 12:31:33.629138  993585 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 12:31:33.765704  993585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 12:31:33.780662  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 12:31:33.799085  993585 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0120 12:31:33.799155  993585 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:31:33.808607  993585 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 12:31:33.808659  993585 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:31:33.818065  993585 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:31:33.827515  993585 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
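
The sed invocations above point cri-o at the pause:3.2 image, switch cgroup_manager to cgroupfs, and reset conmon_cgroup. Here is a minimal sketch of the substitution edits only (the conmon_cgroup delete/append lines are omitted), assuming a local file rather than the remote ssh_runner call; the helper name setKey is made up.

// Illustrative sketch of the sed-style line rewrites on /etc/crio/crio.conf.d/02-crio.conf.
package main

import (
	"os"
	"regexp"
)

// setKey replaces whole lines such as `pause_image = "..."`,
// analogous to sed 's|^.*pause_image = .*$|pause_image = "..."|'.
func setKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	data = re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
	return os.WriteFile(path, data, 0o644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	if err := setKey(conf, "pause_image", "registry.k8s.io/pause:3.2"); err != nil {
		panic(err)
	}
	if err := setKey(conf, "cgroup_manager", "cgroupfs"); err != nil {
		panic(err)
	}
}
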
	I0120 12:31:33.837226  993585 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 12:31:33.846616  993585 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 12:31:33.855024  993585 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 12:31:33.855077  993585 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 12:31:33.867670  993585 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 12:31:33.876402  993585 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:31:34.006664  993585 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 12:31:34.098750  993585 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 12:31:34.098834  993585 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 12:31:34.103642  993585 start.go:563] Will wait 60s for crictl version
	I0120 12:31:34.103699  993585 ssh_runner.go:195] Run: which crictl
	I0120 12:31:34.107125  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 12:31:34.144190  993585 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 12:31:34.144288  993585 ssh_runner.go:195] Run: crio --version
	I0120 12:31:34.172817  993585 ssh_runner.go:195] Run: crio --version
	I0120 12:31:34.203224  993585 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0120 12:31:31.684648  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:33.685881  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:32.467705  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:34.470006  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:34.204485  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetIP
	I0120 12:31:34.207458  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:34.207876  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:34.207904  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:34.208137  993585 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0120 12:31:34.211891  993585 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
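
The bash one-liner above rewrites /etc/hosts: it drops any existing line for host.minikube.internal and appends the gateway mapping 192.168.50.1. A simplified local sketch of the same edit, assuming direct file access instead of the remote grep/cp pipeline:

// Sketch only: mirror { grep -v $'\thost.minikube.internal$' /etc/hosts; echo "..."; } locally.
package main

import (
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts" // use a scratch copy when experimenting
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // same effect as grep -v $'\thost.minikube.internal$'
		}
		kept = append(kept, line)
	}
	kept = append(kept, "192.168.50.1\thost.minikube.internal")
	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}
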
	I0120 12:31:34.223705  993585 kubeadm.go:883] updating cluster {Name:old-k8s-version-134433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-134433 Namespace
:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVe
rsion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 12:31:34.223826  993585 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0120 12:31:34.223864  993585 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:31:34.268289  993585 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0120 12:31:34.268365  993585 ssh_runner.go:195] Run: which lz4
	I0120 12:31:34.272014  993585 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 12:31:34.275957  993585 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 12:31:34.275987  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0120 12:31:35.756157  993585 crio.go:462] duration metric: took 1.484200004s to copy over tarball
	I0120 12:31:35.756230  993585 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 12:31:34.642634  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:37.142882  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:35.687588  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:38.185847  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:36.967824  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:38.968146  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:38.594323  993585 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.838057752s)
	I0120 12:31:38.594429  993585 crio.go:469] duration metric: took 2.838184511s to extract the tarball
	I0120 12:31:38.594454  993585 ssh_runner.go:146] rm: /preloaded.tar.lz4
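
The preload sequence above is: stat /preloaded.tar.lz4 (missing), copy the ~473 MB tarball from the local cache, extract it into /var with lz4, then delete it. A rough local sketch of that check-copy-extract pattern, under the assumption that the cache path is reachable on the same machine (the real run copies over SSH):

// Sketch of the preload check/copy/extract flow logged above; paths taken from the log.
package main

import (
	"io"
	"os"
	"os/exec"
)

func main() {
	const target = "/preloaded.tar.lz4"
	cache := "/home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/" +
		"preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"

	// Existence check, like `stat -c "%s %y" /preloaded.tar.lz4` in the log.
	if _, err := os.Stat(target); os.IsNotExist(err) {
		src, err := os.Open(cache)
		if err != nil {
			panic(err)
		}
		defer src.Close()
		dst, err := os.Create(target)
		if err != nil {
			panic(err)
		}
		if _, err := io.Copy(dst, src); err != nil {
			panic(err)
		}
		dst.Close()
	}

	// Same flags the log shows: preserve xattrs and decompress with lz4.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", target)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
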
	I0120 12:31:38.636288  993585 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:31:38.673987  993585 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0120 12:31:38.674016  993585 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0120 12:31:38.674097  993585 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:31:38.674135  993585 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0120 12:31:38.674145  993585 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:31:38.674178  993585 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:31:38.674112  993585 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:31:38.674208  993585 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0120 12:31:38.674120  993585 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:31:38.674479  993585 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0120 12:31:38.675856  993585 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:31:38.675888  993585 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:31:38.675857  993585 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0120 12:31:38.675857  993585 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:31:38.675858  993585 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:31:38.675860  993585 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0120 12:31:38.675864  993585 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0120 12:31:38.675864  993585 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:31:38.891668  993585 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0120 12:31:38.898693  993585 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:31:38.901324  993585 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:31:38.903830  993585 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:31:38.907827  993585 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0120 12:31:38.909691  993585 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0120 12:31:38.911977  993585 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:31:38.988279  993585 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0120 12:31:38.988332  993585 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0120 12:31:38.988388  993585 ssh_runner.go:195] Run: which crictl
	I0120 12:31:39.039162  993585 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0120 12:31:39.039204  993585 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:31:39.039255  993585 ssh_runner.go:195] Run: which crictl
	I0120 12:31:39.070879  993585 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0120 12:31:39.070922  993585 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:31:39.070974  993585 ssh_runner.go:195] Run: which crictl
	I0120 12:31:39.078869  993585 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0120 12:31:39.078897  993585 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0120 12:31:39.078910  993585 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:31:39.078930  993585 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0120 12:31:39.078948  993585 ssh_runner.go:195] Run: which crictl
	I0120 12:31:39.078955  993585 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0120 12:31:39.078982  993585 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0120 12:31:39.078982  993585 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0120 12:31:39.079004  993585 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:31:39.079014  993585 ssh_runner.go:195] Run: which crictl
	I0120 12:31:39.078986  993585 ssh_runner.go:195] Run: which crictl
	I0120 12:31:39.079039  993585 ssh_runner.go:195] Run: which crictl
	I0120 12:31:39.079028  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 12:31:39.079059  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:31:39.081555  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:31:39.083015  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:31:39.130647  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 12:31:39.130694  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:31:39.186867  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:31:39.186961  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 12:31:39.186966  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 12:31:39.209991  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:31:39.210008  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:31:39.246249  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:31:39.246259  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 12:31:39.321520  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:31:39.321580  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 12:31:39.336397  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 12:31:39.361423  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:31:39.361625  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:31:39.382747  993585 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0120 12:31:39.382804  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 12:31:39.434483  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 12:31:39.434505  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:31:39.494972  993585 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0120 12:31:39.495045  993585 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0120 12:31:39.520487  993585 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0120 12:31:39.520534  993585 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0120 12:31:39.529832  993585 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0120 12:31:39.530428  993585 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0120 12:31:39.865446  993585 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:31:40.001428  993585 cache_images.go:92] duration metric: took 1.327395723s to LoadCachedImages
	W0120 12:31:40.001521  993585 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
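
The cache_images phase above inspects each required image with podman and compares the returned ID against the digest the release expects; a missing image or a mismatch is marked "needs transfer" and removed with crictl rmi before a cached copy is loaded. A small sketch of just the inspect-and-compare check, with the image name and expected hash taken from the pause:3.2 lines above and a made-up helper name:

// Sketch of the "needs transfer" decision for one image.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether the runtime lacks the image or holds a different ID.
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // not present in the container runtime at all
	}
	return strings.TrimSpace(string(out)) != wantID
}

func main() {
	img := "registry.k8s.io/pause:3.2"
	want := "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"
	if needsTransfer(img, want) {
		fmt.Printf("%q needs transfer\n", img)
	}
}
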
	I0120 12:31:40.001540  993585 kubeadm.go:934] updating node { 192.168.50.250 8443 v1.20.0 crio true true} ...
	I0120 12:31:40.001666  993585 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-134433 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-134433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 12:31:40.001759  993585 ssh_runner.go:195] Run: crio config
	I0120 12:31:40.049768  993585 cni.go:84] Creating CNI manager for ""
	I0120 12:31:40.049788  993585 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:31:40.049798  993585 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 12:31:40.049817  993585 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.250 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-134433 NodeName:old-k8s-version-134433 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0120 12:31:40.049953  993585 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-134433"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.250
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.250"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 12:31:40.050035  993585 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0120 12:31:40.060513  993585 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 12:31:40.060576  993585 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 12:31:40.070416  993585 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0120 12:31:40.086321  993585 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 12:31:40.101428  993585 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0120 12:31:40.118688  993585 ssh_runner.go:195] Run: grep 192.168.50.250	control-plane.minikube.internal$ /etc/hosts
	I0120 12:31:40.122319  993585 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.250	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 12:31:40.133757  993585 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:31:40.267585  993585 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:31:40.285307  993585 certs.go:68] Setting up /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433 for IP: 192.168.50.250
	I0120 12:31:40.285334  993585 certs.go:194] generating shared ca certs ...
	I0120 12:31:40.285359  993585 certs.go:226] acquiring lock for ca certs: {Name:mk7d6f61e0c358ebc451db1b73619131ab80bd63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:31:40.285629  993585 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20151-942401/.minikube/ca.key
	I0120 12:31:40.285712  993585 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.key
	I0120 12:31:40.285729  993585 certs.go:256] generating profile certs ...
	I0120 12:31:40.285868  993585 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/client.key
	I0120 12:31:40.320727  993585 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/apiserver.key.6d656c93
	I0120 12:31:40.320836  993585 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/proxy-client.key
	I0120 12:31:40.321012  993585 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656.pem (1338 bytes)
	W0120 12:31:40.321045  993585 certs.go:480] ignoring /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656_empty.pem, impossibly tiny 0 bytes
	I0120 12:31:40.321055  993585 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem (1679 bytes)
	I0120 12:31:40.321077  993585 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem (1078 bytes)
	I0120 12:31:40.321112  993585 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem (1123 bytes)
	I0120 12:31:40.321133  993585 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem (1675 bytes)
	I0120 12:31:40.321173  993585 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem (1708 bytes)
	I0120 12:31:40.321820  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 12:31:40.355849  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0120 12:31:40.384987  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 12:31:40.412042  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 12:31:40.443057  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0120 12:31:40.487592  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0120 12:31:40.524256  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 12:31:40.548205  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 12:31:40.570407  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656.pem --> /usr/share/ca-certificates/949656.pem (1338 bytes)
	I0120 12:31:40.594640  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem --> /usr/share/ca-certificates/9496562.pem (1708 bytes)
	I0120 12:31:40.617736  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 12:31:40.642388  993585 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 12:31:40.658180  993585 ssh_runner.go:195] Run: openssl version
	I0120 12:31:40.663613  993585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/949656.pem && ln -fs /usr/share/ca-certificates/949656.pem /etc/ssl/certs/949656.pem"
	I0120 12:31:40.673079  993585 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/949656.pem
	I0120 12:31:40.677607  993585 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 11:30 /usr/share/ca-certificates/949656.pem
	I0120 12:31:40.677688  993585 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/949656.pem
	I0120 12:31:40.684863  993585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/949656.pem /etc/ssl/certs/51391683.0"
	I0120 12:31:40.694838  993585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9496562.pem && ln -fs /usr/share/ca-certificates/9496562.pem /etc/ssl/certs/9496562.pem"
	I0120 12:31:40.704251  993585 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9496562.pem
	I0120 12:31:40.708616  993585 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 11:30 /usr/share/ca-certificates/9496562.pem
	I0120 12:31:40.708671  993585 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9496562.pem
	I0120 12:31:40.714178  993585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9496562.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 12:31:40.723770  993585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 12:31:40.733248  993585 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:31:40.737473  993585 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 11:22 /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:31:40.737526  993585 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:31:40.742896  993585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
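
The openssl/ln pairs above compute each certificate's subject hash and link it into /etc/ssl/certs as <hash>.0, which is how OpenSSL locates trusted CAs. A hedged sketch of one such pair, shelling out to openssl for the hash exactly as the log does (the certificate path is one of the files shown above):

// Sketch: subject-hash symlink for one CA certificate.
package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/9496562.pem"
	// Same as `openssl x509 -hash -noout -in <cert>` in the log.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "3ec20f2e" in the log above
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs equivalent: drop a stale link first, then symlink.
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
}
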
	I0120 12:31:40.752426  993585 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 12:31:40.756579  993585 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 12:31:40.761769  993585 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 12:31:40.766935  993585 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 12:31:40.772427  993585 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 12:31:40.777720  993585 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 12:31:40.782945  993585 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
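
Each `openssl x509 -noout -checkend 86400` call above asks whether the certificate will still be valid 24 hours from now; a non-zero exit would trigger regeneration. The same check expressed with Go's crypto/x509, as an illustration only (the file path is one of the certs from the log):

// Sketch of the -checkend 86400 validity check.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; regeneration needed")
	} else {
		fmt.Println("certificate valid for at least another 24h")
	}
}
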
	I0120 12:31:40.788029  993585 kubeadm.go:392] StartCluster: {Name:old-k8s-version-134433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-134433 Namespace:de
fault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersi
on:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:31:40.788161  993585 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 12:31:40.788202  993585 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 12:31:40.825500  993585 cri.go:89] found id: ""
	I0120 12:31:40.825563  993585 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 12:31:40.835567  993585 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0120 12:31:40.835588  993585 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0120 12:31:40.835635  993585 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0120 12:31:40.845152  993585 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0120 12:31:40.845853  993585 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-134433" does not appear in /home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:31:40.846275  993585 kubeconfig.go:62] /home/jenkins/minikube-integration/20151-942401/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-134433" cluster setting kubeconfig missing "old-k8s-version-134433" context setting]
	I0120 12:31:40.846897  993585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/kubeconfig: {Name:mk7b2d17f701fc845d991764ab9cc32b8b0646e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:31:40.937033  993585 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0120 12:31:40.947319  993585 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.250
	I0120 12:31:40.947380  993585 kubeadm.go:1160] stopping kube-system containers ...
	I0120 12:31:40.947395  993585 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0120 12:31:40.947453  993585 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 12:31:40.984392  993585 cri.go:89] found id: ""
	I0120 12:31:40.984458  993585 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0120 12:31:41.001578  993585 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:31:41.011794  993585 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:31:41.011819  993585 kubeadm.go:157] found existing configuration files:
	
	I0120 12:31:41.011875  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 12:31:41.021463  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:31:41.021518  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:31:41.030836  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 12:31:41.040645  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:31:41.040698  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:31:41.049821  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 12:31:41.058040  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:31:41.058097  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:31:41.066553  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 12:31:41.075225  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:31:41.075281  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 12:31:41.084906  993585 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 12:31:41.093515  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:31:41.210064  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:31:41.666359  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:31:41.900869  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:31:39.144316  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:41.165382  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:40.817405  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:43.185212  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:41.468125  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:43.966550  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:42.000812  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
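	With the stale kubeconfigs cleared, the generated kubeadm.yaml.new is copied into place and kubeadm is driven phase by phase (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than with a single kubeadm init. The commands above condense into the following illustrative loop, with the PATH prefix pinning the v1.20.0 binaries:
	
	  KCFG=/var/tmp/minikube/kubeadm.yaml
	  sudo cp "${KCFG}.new" "$KCFG"
	  for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	    # $phase is intentionally unquoted so "certs all" expands to two arguments
	    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase $phase --config "$KCFG"
	  done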
	I0120 12:31:42.089692  993585 api_server.go:52] waiting for apiserver process to appear ...
	I0120 12:31:42.089772  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:42.590338  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:43.090787  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:43.590769  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:44.090319  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:44.590108  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:45.089838  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:45.590766  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:46.089997  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:46.590717  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
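	After the etcd phase the bootstrapper waits for a kube-apiserver process to appear, re-running the same pgrep roughly every 500ms, as the timestamps above show. A standalone equivalent of that wait, with a hypothetical 4-minute overall timeout added purely for illustration:
	
	  # -x exact match, -n newest process, -f match against the full command line
	  timeout 240 bash -c \
	    'until sudo pgrep -xnf "kube-apiserver.*minikube.*"; do sleep 0.5; done'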
	I0120 12:31:43.642362  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:46.140694  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:45.684419  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:48.185535  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:45.967037  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:47.967799  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:50.468120  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
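	The interleaved pod_ready lines come from the other profiles running in parallel in this job; each one is polling the Ready condition of its metrics-server pod, which stays False throughout. Roughly the same check done by hand (pod name taken from the log; the profile's kube context is not shown here, so it is omitted):
	
	  kubectl -n kube-system get pod metrics-server-f79f97bbb-4zkcz \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'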
	I0120 12:31:47.090580  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:47.590292  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:48.090251  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:48.589947  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:49.090785  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:49.590768  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:50.090614  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:50.590558  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:51.090311  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:51.590228  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:48.141706  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:50.641289  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:50.684323  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:52.684538  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:52.968580  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:55.466922  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:52.090647  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:52.590162  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:53.090104  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:53.590691  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:54.090868  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:54.590219  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:55.090350  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:55.590003  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:56.090726  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:56.590283  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:52.641982  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:54.643173  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:57.142153  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:54.685013  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:57.186057  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:57.967658  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:59.968521  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:57.089873  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:57.590850  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:58.090780  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:58.590614  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:59.090635  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:59.590451  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:00.090701  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:00.590640  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:01.090753  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:01.590644  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:59.640970  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:01.641596  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:59.684870  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:01.685889  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:04.185105  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:02.466874  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:04.467851  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:02.089853  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:02.590807  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:03.089981  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:03.590808  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:04.090857  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:04.590757  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:05.089933  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:05.590271  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:06.090623  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:06.590064  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:03.644442  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:06.140708  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:06.185872  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:08.683979  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:06.468061  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:08.966912  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:07.090783  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:07.589932  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:08.090055  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:08.590241  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:09.089915  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:09.590298  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:10.089954  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:10.590262  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:11.090497  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:11.590292  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:08.142135  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:10.142823  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:10.685405  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:13.184959  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:11.467184  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:13.966687  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:12.090562  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:12.590135  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:13.090747  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:13.590675  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:14.089959  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:14.589956  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:15.090313  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:15.590672  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:16.090234  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:16.590838  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:12.641948  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:15.141465  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:15.685252  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:17.685468  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:15.968298  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:18.466913  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:17.090436  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:17.589874  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:18.089914  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:18.589959  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:19.090841  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:19.590272  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:20.090818  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:20.590893  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:21.090436  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:21.590656  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:17.641252  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:19.642645  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:22.140826  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:20.184125  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:22.184670  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:24.184995  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:20.967285  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:22.967592  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:25.467420  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:22.090802  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:22.589928  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:23.090636  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:23.590707  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:24.090639  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:24.590650  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:25.089995  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:25.590660  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:26.090132  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:26.590033  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:24.141192  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:26.641799  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:26.684732  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:29.185287  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:27.467860  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:29.967353  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:27.090577  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:27.590867  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:28.090984  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:28.590845  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:29.090300  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:29.590066  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:30.090684  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:30.590040  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:31.090303  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:31.590795  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:28.642020  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:31.141741  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:31.685583  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:34.184568  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:31.967618  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:34.468025  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:32.090206  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:32.590714  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:33.090718  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:33.590378  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:34.090656  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:34.590435  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:35.090317  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:35.590516  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:36.090582  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:36.589956  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:33.142049  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:35.142316  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:36.185027  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:38.684930  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:36.967096  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:39.467542  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:37.090078  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:37.590663  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:38.090428  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:38.590162  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:39.089913  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:39.590888  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:40.090661  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:40.590041  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:41.090883  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:41.590739  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:37.641649  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:40.140763  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:42.141742  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:40.686049  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:43.188216  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:41.966891  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:44.467792  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:42.090408  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:32:42.090485  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:32:42.129790  993585 cri.go:89] found id: ""
	I0120 12:32:42.129819  993585 logs.go:282] 0 containers: []
	W0120 12:32:42.129826  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:32:42.129832  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:32:42.129887  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:32:42.160523  993585 cri.go:89] found id: ""
	I0120 12:32:42.160546  993585 logs.go:282] 0 containers: []
	W0120 12:32:42.160555  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:32:42.160560  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:32:42.160606  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:32:42.194768  993585 cri.go:89] found id: ""
	I0120 12:32:42.194796  993585 logs.go:282] 0 containers: []
	W0120 12:32:42.194803  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:32:42.194808  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:32:42.194878  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:32:42.226406  993585 cri.go:89] found id: ""
	I0120 12:32:42.226435  993585 logs.go:282] 0 containers: []
	W0120 12:32:42.226443  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:32:42.226448  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:32:42.226497  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:32:42.263295  993585 cri.go:89] found id: ""
	I0120 12:32:42.263328  993585 logs.go:282] 0 containers: []
	W0120 12:32:42.263352  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:32:42.263362  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:32:42.263419  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:32:42.293754  993585 cri.go:89] found id: ""
	I0120 12:32:42.293784  993585 logs.go:282] 0 containers: []
	W0120 12:32:42.293794  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:32:42.293803  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:32:42.293866  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:32:42.327600  993585 cri.go:89] found id: ""
	I0120 12:32:42.327631  993585 logs.go:282] 0 containers: []
	W0120 12:32:42.327642  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:32:42.327650  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:32:42.327702  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:32:42.356668  993585 cri.go:89] found id: ""
	I0120 12:32:42.356698  993585 logs.go:282] 0 containers: []
	W0120 12:32:42.356710  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:32:42.356722  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:32:42.356734  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:32:42.405030  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:32:42.405063  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:32:42.417663  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:32:42.417690  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:32:42.538067  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:32:42.538100  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:32:42.538122  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:32:42.607706  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:32:42.607743  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
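	After a minute of polling with no apiserver process, the bootstrapper falls back to collecting diagnostics: it asks CRI-O for any container matching each control-plane component (every listing comes back empty), then pulls kubelet, dmesg, describe-nodes, CRI-O and container-status output. The same collection pass, compressed into the commands visible in the log:
	
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	              kube-controller-manager kindnet kubernetes-dashboard; do
	    sudo crictl ps -a --quiet --name="$name"     # empty: nothing is running yet
	  done
	  sudo journalctl -u kubelet -n 400
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  # fails with "connection refused" on localhost:8443 because the apiserver is down
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	  sudo journalctl -u crio -n 400
	  sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a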
	I0120 12:32:45.149684  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:45.161947  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:32:45.162031  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:32:45.204014  993585 cri.go:89] found id: ""
	I0120 12:32:45.204049  993585 logs.go:282] 0 containers: []
	W0120 12:32:45.204060  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:32:45.204068  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:32:45.204129  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:32:45.245164  993585 cri.go:89] found id: ""
	I0120 12:32:45.245196  993585 logs.go:282] 0 containers: []
	W0120 12:32:45.245206  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:32:45.245214  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:32:45.245278  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:32:45.285368  993585 cri.go:89] found id: ""
	I0120 12:32:45.285401  993585 logs.go:282] 0 containers: []
	W0120 12:32:45.285412  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:32:45.285420  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:32:45.285482  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:32:45.322496  993585 cri.go:89] found id: ""
	I0120 12:32:45.322551  993585 logs.go:282] 0 containers: []
	W0120 12:32:45.322564  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:32:45.322573  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:32:45.322632  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:32:45.353693  993585 cri.go:89] found id: ""
	I0120 12:32:45.353723  993585 logs.go:282] 0 containers: []
	W0120 12:32:45.353731  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:32:45.353737  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:32:45.353786  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:32:45.385705  993585 cri.go:89] found id: ""
	I0120 12:32:45.385735  993585 logs.go:282] 0 containers: []
	W0120 12:32:45.385744  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:32:45.385750  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:32:45.385800  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:32:45.419199  993585 cri.go:89] found id: ""
	I0120 12:32:45.419233  993585 logs.go:282] 0 containers: []
	W0120 12:32:45.419243  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:32:45.419251  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:32:45.419317  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:32:45.453757  993585 cri.go:89] found id: ""
	I0120 12:32:45.453789  993585 logs.go:282] 0 containers: []
	W0120 12:32:45.453800  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:32:45.453813  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:32:45.453828  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:32:45.502873  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:32:45.502902  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:32:45.515215  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:32:45.515240  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:32:45.581415  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:32:45.581443  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:32:45.581458  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:32:45.665418  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:32:45.665450  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:32:44.641564  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:46.642075  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:45.685384  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:48.184725  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:46.967382  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:48.971509  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:48.203193  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:48.215966  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:32:48.216028  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:32:48.247173  993585 cri.go:89] found id: ""
	I0120 12:32:48.247201  993585 logs.go:282] 0 containers: []
	W0120 12:32:48.247212  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:32:48.247219  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:32:48.247280  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:32:48.279393  993585 cri.go:89] found id: ""
	I0120 12:32:48.279421  993585 logs.go:282] 0 containers: []
	W0120 12:32:48.279428  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:32:48.279434  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:32:48.279488  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:32:48.310392  993585 cri.go:89] found id: ""
	I0120 12:32:48.310416  993585 logs.go:282] 0 containers: []
	W0120 12:32:48.310423  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:32:48.310429  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:32:48.310473  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:32:48.342762  993585 cri.go:89] found id: ""
	I0120 12:32:48.342794  993585 logs.go:282] 0 containers: []
	W0120 12:32:48.342803  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:32:48.342811  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:32:48.342869  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:32:48.373905  993585 cri.go:89] found id: ""
	I0120 12:32:48.373931  993585 logs.go:282] 0 containers: []
	W0120 12:32:48.373942  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:32:48.373952  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:32:48.374016  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:32:48.406406  993585 cri.go:89] found id: ""
	I0120 12:32:48.406435  993585 logs.go:282] 0 containers: []
	W0120 12:32:48.406443  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:32:48.406449  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:32:48.406494  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:32:48.442695  993585 cri.go:89] found id: ""
	I0120 12:32:48.442728  993585 logs.go:282] 0 containers: []
	W0120 12:32:48.442738  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:32:48.442746  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:32:48.442813  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:32:48.474459  993585 cri.go:89] found id: ""
	I0120 12:32:48.474485  993585 logs.go:282] 0 containers: []
	W0120 12:32:48.474494  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:32:48.474506  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:32:48.474535  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:32:48.522305  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:32:48.522337  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:32:48.535295  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:32:48.535322  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:32:48.605460  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:32:48.605493  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:32:48.605510  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:32:48.689980  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:32:48.690012  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:32:51.228008  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:51.240647  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:32:51.240708  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:32:51.274219  993585 cri.go:89] found id: ""
	I0120 12:32:51.274255  993585 logs.go:282] 0 containers: []
	W0120 12:32:51.274267  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:32:51.274275  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:32:51.274347  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:32:51.307904  993585 cri.go:89] found id: ""
	I0120 12:32:51.307930  993585 logs.go:282] 0 containers: []
	W0120 12:32:51.307939  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:32:51.307948  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:32:51.308000  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:32:51.342253  993585 cri.go:89] found id: ""
	I0120 12:32:51.342280  993585 logs.go:282] 0 containers: []
	W0120 12:32:51.342288  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:32:51.342294  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:32:51.342340  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:32:51.372185  993585 cri.go:89] found id: ""
	I0120 12:32:51.372211  993585 logs.go:282] 0 containers: []
	W0120 12:32:51.372218  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:32:51.372224  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:32:51.372268  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:32:51.402807  993585 cri.go:89] found id: ""
	I0120 12:32:51.402840  993585 logs.go:282] 0 containers: []
	W0120 12:32:51.402851  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:32:51.402858  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:32:51.402932  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:32:51.434101  993585 cri.go:89] found id: ""
	I0120 12:32:51.434129  993585 logs.go:282] 0 containers: []
	W0120 12:32:51.434139  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:32:51.434147  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:32:51.434217  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:32:51.467394  993585 cri.go:89] found id: ""
	I0120 12:32:51.467422  993585 logs.go:282] 0 containers: []
	W0120 12:32:51.467431  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:32:51.467438  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:32:51.467505  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:32:51.498551  993585 cri.go:89] found id: ""
	I0120 12:32:51.498582  993585 logs.go:282] 0 containers: []
	W0120 12:32:51.498592  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:32:51.498604  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:32:51.498619  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:32:51.577501  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:32:51.577533  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:32:51.618784  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:32:51.618825  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:32:51.671630  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:32:51.671667  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:32:51.685726  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:32:51.685750  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:32:51.751392  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:32:48.642162  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:51.142915  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:50.685157  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:53.185189  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:51.468237  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:53.967177  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:54.251524  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:54.265218  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:32:54.265281  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:32:54.299773  993585 cri.go:89] found id: ""
	I0120 12:32:54.299804  993585 logs.go:282] 0 containers: []
	W0120 12:32:54.299813  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:32:54.299820  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:32:54.299867  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:32:54.330432  993585 cri.go:89] found id: ""
	I0120 12:32:54.330461  993585 logs.go:282] 0 containers: []
	W0120 12:32:54.330471  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:32:54.330479  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:32:54.330565  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:32:54.366364  993585 cri.go:89] found id: ""
	I0120 12:32:54.366400  993585 logs.go:282] 0 containers: []
	W0120 12:32:54.366412  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:32:54.366420  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:32:54.366480  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:32:54.398373  993585 cri.go:89] found id: ""
	I0120 12:32:54.398407  993585 logs.go:282] 0 containers: []
	W0120 12:32:54.398417  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:32:54.398425  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:32:54.398486  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:32:54.437033  993585 cri.go:89] found id: ""
	I0120 12:32:54.437064  993585 logs.go:282] 0 containers: []
	W0120 12:32:54.437074  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:32:54.437081  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:32:54.437141  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:32:54.475179  993585 cri.go:89] found id: ""
	I0120 12:32:54.475203  993585 logs.go:282] 0 containers: []
	W0120 12:32:54.475211  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:32:54.475218  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:32:54.475276  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:32:54.507372  993585 cri.go:89] found id: ""
	I0120 12:32:54.507410  993585 logs.go:282] 0 containers: []
	W0120 12:32:54.507420  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:32:54.507428  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:32:54.507484  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:32:54.538317  993585 cri.go:89] found id: ""
	I0120 12:32:54.538351  993585 logs.go:282] 0 containers: []
	W0120 12:32:54.538362  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:32:54.538379  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:32:54.538400  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:32:54.620638  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:32:54.620683  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:32:54.657830  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:32:54.657859  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:32:54.707420  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:32:54.707448  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:32:54.719611  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:32:54.719640  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:32:54.784727  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:32:53.643750  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:56.141402  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:55.684905  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:57.686081  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:56.467036  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:58.468431  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:00.469379  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:57.285771  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:57.298606  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:32:57.298677  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:32:57.330216  993585 cri.go:89] found id: ""
	I0120 12:32:57.330245  993585 logs.go:282] 0 containers: []
	W0120 12:32:57.330254  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:32:57.330260  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:32:57.330317  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:32:57.362111  993585 cri.go:89] found id: ""
	I0120 12:32:57.362152  993585 logs.go:282] 0 containers: []
	W0120 12:32:57.362162  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:32:57.362169  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:32:57.362220  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:32:57.395597  993585 cri.go:89] found id: ""
	I0120 12:32:57.395624  993585 logs.go:282] 0 containers: []
	W0120 12:32:57.395634  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:32:57.395640  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:32:57.395700  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:32:57.425897  993585 cri.go:89] found id: ""
	I0120 12:32:57.425925  993585 logs.go:282] 0 containers: []
	W0120 12:32:57.425933  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:32:57.425939  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:32:57.425986  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:32:57.458500  993585 cri.go:89] found id: ""
	I0120 12:32:57.458544  993585 logs.go:282] 0 containers: []
	W0120 12:32:57.458554  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:32:57.458563  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:32:57.458625  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:32:57.489583  993585 cri.go:89] found id: ""
	I0120 12:32:57.489616  993585 logs.go:282] 0 containers: []
	W0120 12:32:57.489626  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:32:57.489634  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:32:57.489685  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:32:57.520588  993585 cri.go:89] found id: ""
	I0120 12:32:57.520617  993585 logs.go:282] 0 containers: []
	W0120 12:32:57.520624  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:32:57.520630  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:32:57.520676  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:32:57.555799  993585 cri.go:89] found id: ""
	I0120 12:32:57.555824  993585 logs.go:282] 0 containers: []
	W0120 12:32:57.555833  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:32:57.555843  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:32:57.555855  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:32:57.605038  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:32:57.605071  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:32:57.619575  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:32:57.619603  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:32:57.686685  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:32:57.686703  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:32:57.686731  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:32:57.762968  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:32:57.763003  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:00.306647  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:00.321029  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:00.321083  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:00.355924  993585 cri.go:89] found id: ""
	I0120 12:33:00.355954  993585 logs.go:282] 0 containers: []
	W0120 12:33:00.355963  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:00.355969  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:00.356021  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:00.390766  993585 cri.go:89] found id: ""
	I0120 12:33:00.390793  993585 logs.go:282] 0 containers: []
	W0120 12:33:00.390801  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:00.390807  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:00.390855  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:00.424790  993585 cri.go:89] found id: ""
	I0120 12:33:00.424820  993585 logs.go:282] 0 containers: []
	W0120 12:33:00.424828  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:00.424833  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:00.424880  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:00.454941  993585 cri.go:89] found id: ""
	I0120 12:33:00.454975  993585 logs.go:282] 0 containers: []
	W0120 12:33:00.454987  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:00.454995  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:00.455056  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:00.488642  993585 cri.go:89] found id: ""
	I0120 12:33:00.488670  993585 logs.go:282] 0 containers: []
	W0120 12:33:00.488679  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:00.488684  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:00.488731  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:00.518470  993585 cri.go:89] found id: ""
	I0120 12:33:00.518501  993585 logs.go:282] 0 containers: []
	W0120 12:33:00.518511  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:00.518535  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:00.518595  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:00.554139  993585 cri.go:89] found id: ""
	I0120 12:33:00.554167  993585 logs.go:282] 0 containers: []
	W0120 12:33:00.554174  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:00.554180  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:00.554236  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:00.587766  993585 cri.go:89] found id: ""
	I0120 12:33:00.587792  993585 logs.go:282] 0 containers: []
	W0120 12:33:00.587799  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:00.587809  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:00.587821  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:00.639504  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:00.639541  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:00.651660  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:00.651687  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:00.725669  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:00.725697  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:00.725716  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:00.806460  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:00.806496  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:32:58.642200  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:01.142620  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:00.184931  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:02.684980  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:02.967537  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:05.467661  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:03.341420  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:03.354948  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:03.355022  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:03.389867  993585 cri.go:89] found id: ""
	I0120 12:33:03.389965  993585 logs.go:282] 0 containers: []
	W0120 12:33:03.389977  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:03.389986  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:03.390042  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:03.421478  993585 cri.go:89] found id: ""
	I0120 12:33:03.421505  993585 logs.go:282] 0 containers: []
	W0120 12:33:03.421517  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:03.421525  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:03.421593  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:03.453805  993585 cri.go:89] found id: ""
	I0120 12:33:03.453838  993585 logs.go:282] 0 containers: []
	W0120 12:33:03.453850  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:03.453858  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:03.453917  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:03.487503  993585 cri.go:89] found id: ""
	I0120 12:33:03.487536  993585 logs.go:282] 0 containers: []
	W0120 12:33:03.487547  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:03.487555  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:03.487621  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:03.517560  993585 cri.go:89] found id: ""
	I0120 12:33:03.517585  993585 logs.go:282] 0 containers: []
	W0120 12:33:03.517594  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:03.517602  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:03.517661  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:03.547328  993585 cri.go:89] found id: ""
	I0120 12:33:03.547368  993585 logs.go:282] 0 containers: []
	W0120 12:33:03.547380  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:03.547389  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:03.547447  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:03.580215  993585 cri.go:89] found id: ""
	I0120 12:33:03.580242  993585 logs.go:282] 0 containers: []
	W0120 12:33:03.580251  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:03.580256  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:03.580319  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:03.613176  993585 cri.go:89] found id: ""
	I0120 12:33:03.613208  993585 logs.go:282] 0 containers: []
	W0120 12:33:03.613220  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:03.613233  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:03.613247  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:03.667093  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:03.667129  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:03.680234  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:03.680260  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:03.744763  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:03.744788  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:03.744805  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:03.824813  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:03.824856  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:06.364296  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:06.377247  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:06.377314  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:06.408701  993585 cri.go:89] found id: ""
	I0120 12:33:06.408725  993585 logs.go:282] 0 containers: []
	W0120 12:33:06.408733  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:06.408738  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:06.408800  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:06.440716  993585 cri.go:89] found id: ""
	I0120 12:33:06.440744  993585 logs.go:282] 0 containers: []
	W0120 12:33:06.440752  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:06.440758  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:06.440811  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:06.471832  993585 cri.go:89] found id: ""
	I0120 12:33:06.471866  993585 logs.go:282] 0 containers: []
	W0120 12:33:06.471877  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:06.471884  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:06.471947  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:06.504122  993585 cri.go:89] found id: ""
	I0120 12:33:06.504149  993585 logs.go:282] 0 containers: []
	W0120 12:33:06.504157  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:06.504163  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:06.504214  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:06.535353  993585 cri.go:89] found id: ""
	I0120 12:33:06.535386  993585 logs.go:282] 0 containers: []
	W0120 12:33:06.535397  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:06.535405  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:06.535460  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:06.571284  993585 cri.go:89] found id: ""
	I0120 12:33:06.571309  993585 logs.go:282] 0 containers: []
	W0120 12:33:06.571316  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:06.571322  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:06.571379  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:06.604008  993585 cri.go:89] found id: ""
	I0120 12:33:06.604042  993585 logs.go:282] 0 containers: []
	W0120 12:33:06.604055  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:06.604062  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:06.604142  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:06.636221  993585 cri.go:89] found id: ""
	I0120 12:33:06.636258  993585 logs.go:282] 0 containers: []
	W0120 12:33:06.636270  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:06.636284  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:06.636299  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:06.671820  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:06.671845  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:06.723338  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:06.723369  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:06.736258  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:06.736285  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:06.807310  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:06.807336  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:06.807352  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:03.642811  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:06.141374  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:04.685422  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:07.184287  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:09.185215  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:07.469260  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:09.967169  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:09.386909  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:09.399300  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:09.399363  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:09.431976  993585 cri.go:89] found id: ""
	I0120 12:33:09.432013  993585 logs.go:282] 0 containers: []
	W0120 12:33:09.432025  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:09.432032  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:09.432085  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:09.468016  993585 cri.go:89] found id: ""
	I0120 12:33:09.468042  993585 logs.go:282] 0 containers: []
	W0120 12:33:09.468053  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:09.468061  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:09.468124  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:09.501613  993585 cri.go:89] found id: ""
	I0120 12:33:09.501648  993585 logs.go:282] 0 containers: []
	W0120 12:33:09.501657  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:09.501667  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:09.501734  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:09.535261  993585 cri.go:89] found id: ""
	I0120 12:33:09.535296  993585 logs.go:282] 0 containers: []
	W0120 12:33:09.535308  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:09.535315  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:09.535382  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:09.569838  993585 cri.go:89] found id: ""
	I0120 12:33:09.569873  993585 logs.go:282] 0 containers: []
	W0120 12:33:09.569885  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:09.569893  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:09.569961  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:09.601673  993585 cri.go:89] found id: ""
	I0120 12:33:09.601701  993585 logs.go:282] 0 containers: []
	W0120 12:33:09.601709  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:09.601714  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:09.601773  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:09.638035  993585 cri.go:89] found id: ""
	I0120 12:33:09.638068  993585 logs.go:282] 0 containers: []
	W0120 12:33:09.638080  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:09.638089  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:09.638155  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:09.671128  993585 cri.go:89] found id: ""
	I0120 12:33:09.671149  993585 logs.go:282] 0 containers: []
	W0120 12:33:09.671156  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:09.671165  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:09.671178  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:09.723616  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:09.723648  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:09.737987  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:09.738020  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:09.810583  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:09.810613  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:09.810627  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:09.887641  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:09.887676  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:08.141896  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:10.642250  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:11.685128  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:13.686705  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:11.968039  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:13.962039  992109 pod_ready.go:82] duration metric: took 4m0.001004044s for pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace to be "Ready" ...
	E0120 12:33:13.962067  992109 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0120 12:33:13.962099  992109 pod_ready.go:39] duration metric: took 4m14.545589853s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:33:13.962140  992109 kubeadm.go:597] duration metric: took 4m21.118193658s to restartPrimaryControlPlane
	W0120 12:33:13.962239  992109 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0120 12:33:13.962281  992109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 12:33:12.423728  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:12.437277  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:12.437368  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:12.470427  993585 cri.go:89] found id: ""
	I0120 12:33:12.470455  993585 logs.go:282] 0 containers: []
	W0120 12:33:12.470463  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:12.470468  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:12.470546  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:12.501063  993585 cri.go:89] found id: ""
	I0120 12:33:12.501103  993585 logs.go:282] 0 containers: []
	W0120 12:33:12.501130  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:12.501138  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:12.501287  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:12.535254  993585 cri.go:89] found id: ""
	I0120 12:33:12.535284  993585 logs.go:282] 0 containers: []
	W0120 12:33:12.535295  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:12.535303  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:12.535354  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:12.568250  993585 cri.go:89] found id: ""
	I0120 12:33:12.568289  993585 logs.go:282] 0 containers: []
	W0120 12:33:12.568301  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:12.568307  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:12.568372  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:12.599927  993585 cri.go:89] found id: ""
	I0120 12:33:12.599961  993585 logs.go:282] 0 containers: []
	W0120 12:33:12.599970  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:12.599976  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:12.600031  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:12.632502  993585 cri.go:89] found id: ""
	I0120 12:33:12.632537  993585 logs.go:282] 0 containers: []
	W0120 12:33:12.632549  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:12.632559  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:12.632620  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:12.664166  993585 cri.go:89] found id: ""
	I0120 12:33:12.664200  993585 logs.go:282] 0 containers: []
	W0120 12:33:12.664208  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:12.664216  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:12.664270  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:12.697996  993585 cri.go:89] found id: ""
	I0120 12:33:12.698028  993585 logs.go:282] 0 containers: []
	W0120 12:33:12.698039  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:12.698054  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:12.698070  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:12.751712  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:12.751745  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:12.765184  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:12.765213  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:12.830999  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:12.831027  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:12.831046  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:12.911211  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:12.911244  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:15.449634  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:15.464863  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:15.464931  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:15.495576  993585 cri.go:89] found id: ""
	I0120 12:33:15.495609  993585 logs.go:282] 0 containers: []
	W0120 12:33:15.495620  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:15.495629  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:15.495689  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:15.525730  993585 cri.go:89] found id: ""
	I0120 12:33:15.525757  993585 logs.go:282] 0 containers: []
	W0120 12:33:15.525767  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:15.525775  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:15.525832  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:15.556077  993585 cri.go:89] found id: ""
	I0120 12:33:15.556117  993585 logs.go:282] 0 containers: []
	W0120 12:33:15.556127  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:15.556135  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:15.556195  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:15.585820  993585 cri.go:89] found id: ""
	I0120 12:33:15.585852  993585 logs.go:282] 0 containers: []
	W0120 12:33:15.585860  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:15.585867  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:15.585924  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:15.615985  993585 cri.go:89] found id: ""
	I0120 12:33:15.616027  993585 logs.go:282] 0 containers: []
	W0120 12:33:15.616035  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:15.616041  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:15.616093  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:15.648570  993585 cri.go:89] found id: ""
	I0120 12:33:15.648604  993585 logs.go:282] 0 containers: []
	W0120 12:33:15.648611  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:15.648617  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:15.648664  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:15.678674  993585 cri.go:89] found id: ""
	I0120 12:33:15.678704  993585 logs.go:282] 0 containers: []
	W0120 12:33:15.678714  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:15.678721  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:15.678786  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:15.708444  993585 cri.go:89] found id: ""
	I0120 12:33:15.708468  993585 logs.go:282] 0 containers: []
	W0120 12:33:15.708476  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:15.708485  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:15.708500  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:15.758053  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:15.758083  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:15.770661  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:15.770688  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:15.833234  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:15.833257  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:15.833271  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:15.906939  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:15.906969  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:13.142031  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:15.642742  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:16.184659  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:18.185053  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:18.442922  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:18.455489  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:18.455557  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:18.495102  993585 cri.go:89] found id: ""
	I0120 12:33:18.495135  993585 logs.go:282] 0 containers: []
	W0120 12:33:18.495145  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:18.495154  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:18.495225  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:18.530047  993585 cri.go:89] found id: ""
	I0120 12:33:18.530078  993585 logs.go:282] 0 containers: []
	W0120 12:33:18.530094  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:18.530102  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:18.530165  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:18.566556  993585 cri.go:89] found id: ""
	I0120 12:33:18.566585  993585 logs.go:282] 0 containers: []
	W0120 12:33:18.566595  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:18.566602  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:18.566661  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:18.604783  993585 cri.go:89] found id: ""
	I0120 12:33:18.604819  993585 logs.go:282] 0 containers: []
	W0120 12:33:18.604834  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:18.604842  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:18.604913  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:18.638998  993585 cri.go:89] found id: ""
	I0120 12:33:18.639025  993585 logs.go:282] 0 containers: []
	W0120 12:33:18.639036  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:18.639043  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:18.639107  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:18.669083  993585 cri.go:89] found id: ""
	I0120 12:33:18.669121  993585 logs.go:282] 0 containers: []
	W0120 12:33:18.669130  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:18.669136  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:18.669192  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:18.701062  993585 cri.go:89] found id: ""
	I0120 12:33:18.701089  993585 logs.go:282] 0 containers: []
	W0120 12:33:18.701097  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:18.701115  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:18.701180  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:18.732086  993585 cri.go:89] found id: ""
	I0120 12:33:18.732131  993585 logs.go:282] 0 containers: []
	W0120 12:33:18.732142  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:18.732157  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:18.732174  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:18.779325  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:18.779357  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:18.792530  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:18.792565  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:18.863429  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:18.863452  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:18.863464  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:18.941343  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:18.941375  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:21.481380  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:21.493618  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:21.493699  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:21.524040  993585 cri.go:89] found id: ""
	I0120 12:33:21.524067  993585 logs.go:282] 0 containers: []
	W0120 12:33:21.524075  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:21.524081  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:21.524149  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:21.554666  993585 cri.go:89] found id: ""
	I0120 12:33:21.554698  993585 logs.go:282] 0 containers: []
	W0120 12:33:21.554708  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:21.554715  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:21.554762  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:21.585584  993585 cri.go:89] found id: ""
	I0120 12:33:21.585610  993585 logs.go:282] 0 containers: []
	W0120 12:33:21.585617  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:21.585623  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:21.585670  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:21.615611  993585 cri.go:89] found id: ""
	I0120 12:33:21.615646  993585 logs.go:282] 0 containers: []
	W0120 12:33:21.615657  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:21.615666  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:21.615715  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:21.646761  993585 cri.go:89] found id: ""
	I0120 12:33:21.646788  993585 logs.go:282] 0 containers: []
	W0120 12:33:21.646796  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:21.646801  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:21.646853  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:21.681380  993585 cri.go:89] found id: ""
	I0120 12:33:21.681410  993585 logs.go:282] 0 containers: []
	W0120 12:33:21.681420  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:21.681428  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:21.681488  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:21.712708  993585 cri.go:89] found id: ""
	I0120 12:33:21.712743  993585 logs.go:282] 0 containers: []
	W0120 12:33:21.712759  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:21.712766  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:21.712828  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:21.746105  993585 cri.go:89] found id: ""
	I0120 12:33:21.746132  993585 logs.go:282] 0 containers: []
	W0120 12:33:21.746140  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:21.746150  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:21.746162  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:21.795702  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:21.795744  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:21.807548  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:21.807570  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:21.869605  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:21.869627  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:21.869646  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:21.941092  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:21.941120  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:18.142112  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:20.642242  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:20.185265  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:22.684404  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:24.487520  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:24.501031  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:24.501119  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:24.533191  993585 cri.go:89] found id: ""
	I0120 12:33:24.533220  993585 logs.go:282] 0 containers: []
	W0120 12:33:24.533230  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:24.533237  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:24.533300  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:24.565809  993585 cri.go:89] found id: ""
	I0120 12:33:24.565837  993585 logs.go:282] 0 containers: []
	W0120 12:33:24.565845  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:24.565850  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:24.565902  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:24.600607  993585 cri.go:89] found id: ""
	I0120 12:33:24.600643  993585 logs.go:282] 0 containers: []
	W0120 12:33:24.600655  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:24.600663  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:24.600742  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:24.637320  993585 cri.go:89] found id: ""
	I0120 12:33:24.637354  993585 logs.go:282] 0 containers: []
	W0120 12:33:24.637365  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:24.637373  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:24.637433  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:24.674906  993585 cri.go:89] found id: ""
	I0120 12:33:24.674940  993585 logs.go:282] 0 containers: []
	W0120 12:33:24.674952  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:24.674960  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:24.675024  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:24.707058  993585 cri.go:89] found id: ""
	I0120 12:33:24.707084  993585 logs.go:282] 0 containers: []
	W0120 12:33:24.707091  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:24.707097  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:24.707159  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:24.740554  993585 cri.go:89] found id: ""
	I0120 12:33:24.740590  993585 logs.go:282] 0 containers: []
	W0120 12:33:24.740603  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:24.740614  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:24.740680  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:24.773021  993585 cri.go:89] found id: ""
	I0120 12:33:24.773052  993585 logs.go:282] 0 containers: []
	W0120 12:33:24.773064  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:24.773077  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:24.773094  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:24.863129  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:24.863156  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:24.863169  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:24.939479  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:24.939516  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:24.975325  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:24.975358  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:25.026952  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:25.026993  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:23.141922  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:25.142300  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:24.685216  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:26.687261  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:29.183496  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:27.539957  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:27.553387  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:27.553449  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:27.587773  993585 cri.go:89] found id: ""
	I0120 12:33:27.587804  993585 logs.go:282] 0 containers: []
	W0120 12:33:27.587812  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:27.587818  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:27.587868  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:27.617735  993585 cri.go:89] found id: ""
	I0120 12:33:27.617767  993585 logs.go:282] 0 containers: []
	W0120 12:33:27.617777  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:27.617785  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:27.617865  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:27.652958  993585 cri.go:89] found id: ""
	I0120 12:33:27.652978  993585 logs.go:282] 0 containers: []
	W0120 12:33:27.652985  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:27.652990  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:27.653047  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:27.686924  993585 cri.go:89] found id: ""
	I0120 12:33:27.686947  993585 logs.go:282] 0 containers: []
	W0120 12:33:27.686954  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:27.686960  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:27.687012  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:27.720217  993585 cri.go:89] found id: ""
	I0120 12:33:27.720246  993585 logs.go:282] 0 containers: []
	W0120 12:33:27.720258  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:27.720265  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:27.720334  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:27.757382  993585 cri.go:89] found id: ""
	I0120 12:33:27.757418  993585 logs.go:282] 0 containers: []
	W0120 12:33:27.757430  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:27.757438  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:27.757504  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:27.788498  993585 cri.go:89] found id: ""
	I0120 12:33:27.788528  993585 logs.go:282] 0 containers: []
	W0120 12:33:27.788538  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:27.788546  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:27.788616  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:27.820146  993585 cri.go:89] found id: ""
	I0120 12:33:27.820178  993585 logs.go:282] 0 containers: []
	W0120 12:33:27.820186  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:27.820196  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:27.820207  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:27.832201  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:27.832225  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:27.905179  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:27.905202  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:27.905227  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:27.984792  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:27.984829  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:28.027290  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:28.027397  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:30.578691  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:30.591302  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:30.591365  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:30.627747  993585 cri.go:89] found id: ""
	I0120 12:33:30.627775  993585 logs.go:282] 0 containers: []
	W0120 12:33:30.627802  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:30.627810  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:30.627881  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:30.674653  993585 cri.go:89] found id: ""
	I0120 12:33:30.674684  993585 logs.go:282] 0 containers: []
	W0120 12:33:30.674694  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:30.674702  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:30.674766  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:30.716811  993585 cri.go:89] found id: ""
	I0120 12:33:30.716839  993585 logs.go:282] 0 containers: []
	W0120 12:33:30.716850  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:30.716857  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:30.716922  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:30.749623  993585 cri.go:89] found id: ""
	I0120 12:33:30.749655  993585 logs.go:282] 0 containers: []
	W0120 12:33:30.749666  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:30.749674  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:30.749742  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:30.780140  993585 cri.go:89] found id: ""
	I0120 12:33:30.780172  993585 logs.go:282] 0 containers: []
	W0120 12:33:30.780180  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:30.780186  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:30.780241  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:30.808356  993585 cri.go:89] found id: ""
	I0120 12:33:30.808387  993585 logs.go:282] 0 containers: []
	W0120 12:33:30.808395  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:30.808407  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:30.808476  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:30.842019  993585 cri.go:89] found id: ""
	I0120 12:33:30.842047  993585 logs.go:282] 0 containers: []
	W0120 12:33:30.842054  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:30.842060  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:30.842109  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:30.871526  993585 cri.go:89] found id: ""
	I0120 12:33:30.871551  993585 logs.go:282] 0 containers: []
	W0120 12:33:30.871559  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:30.871568  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:30.871581  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:30.919022  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:30.919051  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:30.931897  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:30.931933  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:30.993261  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:30.993282  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:30.993296  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:31.069346  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:31.069384  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:27.642074  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:30.142170  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:31.184534  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:33.184696  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:33.606755  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:33.619163  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:33.619232  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:33.654390  993585 cri.go:89] found id: ""
	I0120 12:33:33.654423  993585 logs.go:282] 0 containers: []
	W0120 12:33:33.654432  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:33.654438  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:33.654487  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:33.689183  993585 cri.go:89] found id: ""
	I0120 12:33:33.689218  993585 logs.go:282] 0 containers: []
	W0120 12:33:33.689230  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:33.689239  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:33.689302  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:33.720803  993585 cri.go:89] found id: ""
	I0120 12:33:33.720832  993585 logs.go:282] 0 containers: []
	W0120 12:33:33.720839  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:33.720845  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:33.720893  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:33.755948  993585 cri.go:89] found id: ""
	I0120 12:33:33.755985  993585 logs.go:282] 0 containers: []
	W0120 12:33:33.755995  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:33.756003  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:33.756071  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:33.788407  993585 cri.go:89] found id: ""
	I0120 12:33:33.788444  993585 logs.go:282] 0 containers: []
	W0120 12:33:33.788457  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:33.788466  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:33.788524  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:33.819077  993585 cri.go:89] found id: ""
	I0120 12:33:33.819102  993585 logs.go:282] 0 containers: []
	W0120 12:33:33.819109  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:33.819115  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:33.819164  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:33.848263  993585 cri.go:89] found id: ""
	I0120 12:33:33.848288  993585 logs.go:282] 0 containers: []
	W0120 12:33:33.848296  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:33.848301  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:33.848347  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:33.877393  993585 cri.go:89] found id: ""
	I0120 12:33:33.877428  993585 logs.go:282] 0 containers: []
	W0120 12:33:33.877439  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:33.877451  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:33.877462  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:33.928766  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:33.928796  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:33.941450  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:33.941474  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:34.004416  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:34.004446  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:34.004461  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:34.079056  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:34.079088  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:36.622644  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:36.634862  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:36.634939  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:36.670074  993585 cri.go:89] found id: ""
	I0120 12:33:36.670113  993585 logs.go:282] 0 containers: []
	W0120 12:33:36.670124  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:36.670132  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:36.670189  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:36.706117  993585 cri.go:89] found id: ""
	I0120 12:33:36.706152  993585 logs.go:282] 0 containers: []
	W0120 12:33:36.706159  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:36.706164  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:36.706219  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:36.741133  993585 cri.go:89] found id: ""
	I0120 12:33:36.741167  993585 logs.go:282] 0 containers: []
	W0120 12:33:36.741177  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:36.741185  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:36.741242  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:36.773791  993585 cri.go:89] found id: ""
	I0120 12:33:36.773819  993585 logs.go:282] 0 containers: []
	W0120 12:33:36.773830  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:36.773837  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:36.773901  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:36.807401  993585 cri.go:89] found id: ""
	I0120 12:33:36.807432  993585 logs.go:282] 0 containers: []
	W0120 12:33:36.807440  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:36.807447  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:36.807500  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:36.839815  993585 cri.go:89] found id: ""
	I0120 12:33:36.839850  993585 logs.go:282] 0 containers: []
	W0120 12:33:36.839861  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:36.839870  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:36.839934  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:36.868579  993585 cri.go:89] found id: ""
	I0120 12:33:36.868610  993585 logs.go:282] 0 containers: []
	W0120 12:33:36.868620  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:36.868626  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:36.868685  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:36.898430  993585 cri.go:89] found id: ""
	I0120 12:33:36.898455  993585 logs.go:282] 0 containers: []
	W0120 12:33:36.898462  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:36.898475  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:36.898490  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:36.947718  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:36.947758  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:32.641645  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:35.141557  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:37.141719  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:35.684708  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:37.685419  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:36.962705  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:36.962740  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:37.053761  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:37.053792  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:37.053805  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:37.148364  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:37.148400  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:39.690060  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:39.702447  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:39.702516  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:39.733846  993585 cri.go:89] found id: ""
	I0120 12:33:39.733868  993585 logs.go:282] 0 containers: []
	W0120 12:33:39.733876  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:39.733883  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:39.733939  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:39.762657  993585 cri.go:89] found id: ""
	I0120 12:33:39.762682  993585 logs.go:282] 0 containers: []
	W0120 12:33:39.762690  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:39.762695  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:39.762743  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:39.794803  993585 cri.go:89] found id: ""
	I0120 12:33:39.794832  993585 logs.go:282] 0 containers: []
	W0120 12:33:39.794841  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:39.794847  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:39.794891  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:39.823584  993585 cri.go:89] found id: ""
	I0120 12:33:39.823614  993585 logs.go:282] 0 containers: []
	W0120 12:33:39.823625  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:39.823633  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:39.823689  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:39.851954  993585 cri.go:89] found id: ""
	I0120 12:33:39.851978  993585 logs.go:282] 0 containers: []
	W0120 12:33:39.851985  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:39.851991  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:39.852091  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:39.881315  993585 cri.go:89] found id: ""
	I0120 12:33:39.881347  993585 logs.go:282] 0 containers: []
	W0120 12:33:39.881358  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:39.881367  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:39.881428  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:39.911797  993585 cri.go:89] found id: ""
	I0120 12:33:39.911827  993585 logs.go:282] 0 containers: []
	W0120 12:33:39.911836  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:39.911841  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:39.911887  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:39.941625  993585 cri.go:89] found id: ""
	I0120 12:33:39.941653  993585 logs.go:282] 0 containers: []
	W0120 12:33:39.941661  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:39.941671  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:39.941683  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:39.991689  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:39.991718  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:40.004850  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:40.004871  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:40.069863  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:40.069883  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:40.069894  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:40.149093  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:40.149129  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:39.142612  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:41.145567  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:40.184106  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:42.184765  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:41.582218  992109 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.61991226s)
	I0120 12:33:41.582297  992109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 12:33:41.597367  992109 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 12:33:41.606890  992109 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:33:41.615799  992109 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:33:41.615823  992109 kubeadm.go:157] found existing configuration files:
	
	I0120 12:33:41.615890  992109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 12:33:41.624548  992109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:33:41.624613  992109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:33:41.634296  992109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 12:33:41.645019  992109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:33:41.645069  992109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:33:41.653988  992109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 12:33:41.662620  992109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:33:41.662661  992109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:33:41.671164  992109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 12:33:41.679068  992109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:33:41.679121  992109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 12:33:41.687730  992109 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 12:33:41.842158  992109 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 12:33:42.692596  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:42.710550  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:42.710636  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:42.761626  993585 cri.go:89] found id: ""
	I0120 12:33:42.761665  993585 logs.go:282] 0 containers: []
	W0120 12:33:42.761677  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:42.761685  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:42.761753  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:42.825148  993585 cri.go:89] found id: ""
	I0120 12:33:42.825181  993585 logs.go:282] 0 containers: []
	W0120 12:33:42.825191  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:42.825196  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:42.825258  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:42.859035  993585 cri.go:89] found id: ""
	I0120 12:33:42.859066  993585 logs.go:282] 0 containers: []
	W0120 12:33:42.859075  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:42.859081  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:42.859134  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:42.890335  993585 cri.go:89] found id: ""
	I0120 12:33:42.890364  993585 logs.go:282] 0 containers: []
	W0120 12:33:42.890372  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:42.890378  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:42.890442  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:42.929857  993585 cri.go:89] found id: ""
	I0120 12:33:42.929882  993585 logs.go:282] 0 containers: []
	W0120 12:33:42.929890  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:42.929896  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:42.929944  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:42.960830  993585 cri.go:89] found id: ""
	I0120 12:33:42.960864  993585 logs.go:282] 0 containers: []
	W0120 12:33:42.960874  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:42.960882  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:42.960948  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:42.995324  993585 cri.go:89] found id: ""
	I0120 12:33:42.995354  993585 logs.go:282] 0 containers: []
	W0120 12:33:42.995368  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:42.995374  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:42.995424  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:43.028259  993585 cri.go:89] found id: ""
	I0120 12:33:43.028286  993585 logs.go:282] 0 containers: []
	W0120 12:33:43.028294  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:43.028306  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:43.028316  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:43.079487  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:43.079517  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:43.091452  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:43.091475  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:43.153152  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:43.153178  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:43.153192  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:43.236284  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:43.236325  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:45.774706  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:45.791967  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:45.792052  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:45.824678  993585 cri.go:89] found id: ""
	I0120 12:33:45.824710  993585 logs.go:282] 0 containers: []
	W0120 12:33:45.824720  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:45.824729  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:45.824793  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:45.857843  993585 cri.go:89] found id: ""
	I0120 12:33:45.857876  993585 logs.go:282] 0 containers: []
	W0120 12:33:45.857885  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:45.857891  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:45.857944  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:45.898182  993585 cri.go:89] found id: ""
	I0120 12:33:45.898215  993585 logs.go:282] 0 containers: []
	W0120 12:33:45.898227  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:45.898235  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:45.898302  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:45.929223  993585 cri.go:89] found id: ""
	I0120 12:33:45.929259  993585 logs.go:282] 0 containers: []
	W0120 12:33:45.929272  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:45.929282  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:45.929380  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:45.960800  993585 cri.go:89] found id: ""
	I0120 12:33:45.960849  993585 logs.go:282] 0 containers: []
	W0120 12:33:45.960870  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:45.960879  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:45.960957  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:45.997846  993585 cri.go:89] found id: ""
	I0120 12:33:45.997878  993585 logs.go:282] 0 containers: []
	W0120 12:33:45.997889  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:45.997897  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:45.997969  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:46.033227  993585 cri.go:89] found id: ""
	I0120 12:33:46.033267  993585 logs.go:282] 0 containers: []
	W0120 12:33:46.033278  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:46.033286  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:46.033354  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:46.066691  993585 cri.go:89] found id: ""
	I0120 12:33:46.066723  993585 logs.go:282] 0 containers: []
	W0120 12:33:46.066733  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:46.066746  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:46.066763  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:46.133257  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:46.133280  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:46.133293  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:46.232667  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:46.232720  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:46.274332  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:46.274371  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:46.327098  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:46.327142  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:43.642109  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:45.643138  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:44.686233  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:47.185408  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:49.186465  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:49.627545  992109 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 12:33:49.627631  992109 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 12:33:49.627743  992109 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 12:33:49.627898  992109 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 12:33:49.628021  992109 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 12:33:49.628110  992109 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 12:33:49.629521  992109 out.go:235]   - Generating certificates and keys ...
	I0120 12:33:49.629586  992109 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 12:33:49.629652  992109 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 12:33:49.629732  992109 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 12:33:49.629811  992109 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 12:33:49.629945  992109 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 12:33:49.630101  992109 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 12:33:49.630179  992109 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 12:33:49.630255  992109 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 12:33:49.630331  992109 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 12:33:49.630426  992109 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 12:33:49.630491  992109 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 12:33:49.630586  992109 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 12:33:49.630669  992109 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 12:33:49.630752  992109 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 12:33:49.630819  992109 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 12:33:49.630898  992109 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 12:33:49.630946  992109 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 12:33:49.631065  992109 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 12:33:49.631148  992109 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 12:33:49.632352  992109 out.go:235]   - Booting up control plane ...
	I0120 12:33:49.632439  992109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 12:33:49.632500  992109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 12:33:49.632581  992109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 12:33:49.632734  992109 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 12:33:49.632818  992109 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 12:33:49.632854  992109 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 12:33:49.632972  992109 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 12:33:49.633093  992109 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 12:33:49.633183  992109 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.459324ms
	I0120 12:33:49.633288  992109 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 12:33:49.633376  992109 kubeadm.go:310] [api-check] The API server is healthy after 5.002077681s
	I0120 12:33:49.633495  992109 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 12:33:49.633603  992109 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 12:33:49.633652  992109 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 12:33:49.633813  992109 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-496524 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 12:33:49.633900  992109 kubeadm.go:310] [bootstrap-token] Using token: sww9nb.rwz5issf9tlw104y
	I0120 12:33:49.635315  992109 out.go:235]   - Configuring RBAC rules ...
	I0120 12:33:49.635441  992109 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 12:33:49.635546  992109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 12:33:49.635673  992109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 12:33:49.635790  992109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 12:33:49.635890  992109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 12:33:49.635965  992109 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 12:33:49.636063  992109 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 12:33:49.636105  992109 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 12:33:49.636151  992109 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 12:33:49.636157  992109 kubeadm.go:310] 
	I0120 12:33:49.636247  992109 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 12:33:49.636272  992109 kubeadm.go:310] 
	I0120 12:33:49.636388  992109 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 12:33:49.636400  992109 kubeadm.go:310] 
	I0120 12:33:49.636441  992109 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 12:33:49.636523  992109 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 12:33:49.636598  992109 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 12:33:49.636608  992109 kubeadm.go:310] 
	I0120 12:33:49.636714  992109 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 12:33:49.636738  992109 kubeadm.go:310] 
	I0120 12:33:49.636800  992109 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 12:33:49.636810  992109 kubeadm.go:310] 
	I0120 12:33:49.636874  992109 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 12:33:49.636984  992109 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 12:33:49.637071  992109 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 12:33:49.637082  992109 kubeadm.go:310] 
	I0120 12:33:49.637206  992109 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 12:33:49.637348  992109 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 12:33:49.637365  992109 kubeadm.go:310] 
	I0120 12:33:49.637484  992109 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token sww9nb.rwz5issf9tlw104y \
	I0120 12:33:49.637627  992109 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9daca4b474befed26d429fab1ed19f40e58ea9925316ea59a2e801171f5c9665 \
	I0120 12:33:49.637685  992109 kubeadm.go:310] 	--control-plane 
	I0120 12:33:49.637704  992109 kubeadm.go:310] 
	I0120 12:33:49.637810  992109 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 12:33:49.637826  992109 kubeadm.go:310] 
	I0120 12:33:49.637934  992109 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sww9nb.rwz5issf9tlw104y \
	I0120 12:33:49.638086  992109 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9daca4b474befed26d429fab1ed19f40e58ea9925316ea59a2e801171f5c9665 
	I0120 12:33:49.638103  992109 cni.go:84] Creating CNI manager for ""
	I0120 12:33:49.638112  992109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:33:49.639791  992109 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 12:33:49.641114  992109 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 12:33:49.651726  992109 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 12:33:49.670543  992109 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 12:33:49.670636  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:49.670688  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-496524 minikube.k8s.io/updated_at=2025_01_20T12_33_49_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=77d80cf1517f5f1439721b28711982314b21bec9 minikube.k8s.io/name=no-preload-496524 minikube.k8s.io/primary=true
	I0120 12:33:49.704840  992109 ops.go:34] apiserver oom_adj: -16
	I0120 12:33:49.859209  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:50.359791  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:50.859509  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:48.841385  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:48.854037  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:48.854105  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:48.889959  993585 cri.go:89] found id: ""
	I0120 12:33:48.889996  993585 logs.go:282] 0 containers: []
	W0120 12:33:48.890008  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:48.890017  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:48.890084  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:48.926271  993585 cri.go:89] found id: ""
	I0120 12:33:48.926313  993585 logs.go:282] 0 containers: []
	W0120 12:33:48.926326  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:48.926334  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:48.926409  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:48.962768  993585 cri.go:89] found id: ""
	I0120 12:33:48.962803  993585 logs.go:282] 0 containers: []
	W0120 12:33:48.962816  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:48.962825  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:48.962895  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:48.998039  993585 cri.go:89] found id: ""
	I0120 12:33:48.998075  993585 logs.go:282] 0 containers: []
	W0120 12:33:48.998086  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:48.998093  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:48.998161  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:49.038710  993585 cri.go:89] found id: ""
	I0120 12:33:49.038745  993585 logs.go:282] 0 containers: []
	W0120 12:33:49.038756  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:49.038765  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:49.038835  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:49.074829  993585 cri.go:89] found id: ""
	I0120 12:33:49.074863  993585 logs.go:282] 0 containers: []
	W0120 12:33:49.074874  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:49.074883  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:49.074950  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:49.115354  993585 cri.go:89] found id: ""
	I0120 12:33:49.115383  993585 logs.go:282] 0 containers: []
	W0120 12:33:49.115392  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:49.115397  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:49.115446  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:49.152837  993585 cri.go:89] found id: ""
	I0120 12:33:49.152870  993585 logs.go:282] 0 containers: []
	W0120 12:33:49.152880  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:49.152892  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:49.152906  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:49.194817  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:49.194842  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:49.247223  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:49.247255  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:49.259939  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:49.259965  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:49.326047  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:49.326081  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:49.326108  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:51.904391  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:51.916726  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:51.916806  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:51.950574  993585 cri.go:89] found id: ""
	I0120 12:33:51.950602  993585 logs.go:282] 0 containers: []
	W0120 12:33:51.950610  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:51.950619  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:51.950683  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:48.141455  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:50.142912  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:51.359718  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:51.859742  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:52.359728  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:52.859803  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:53.359731  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:53.859729  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:53.963052  992109 kubeadm.go:1113] duration metric: took 4.292471944s to wait for elevateKubeSystemPrivileges
	I0120 12:33:53.963109  992109 kubeadm.go:394] duration metric: took 5m1.161906665s to StartCluster
	I0120 12:33:53.963139  992109 settings.go:142] acquiring lock: {Name:mk1751cb86b61fcfcb0ff093c93fb653ca2002ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:33:53.963257  992109 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:33:53.964929  992109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/kubeconfig: {Name:mk7b2d17f701fc845d991764ab9cc32b8b0646e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:33:53.965243  992109 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 12:33:53.965321  992109 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 12:33:53.965437  992109 addons.go:69] Setting storage-provisioner=true in profile "no-preload-496524"
	I0120 12:33:53.965452  992109 addons.go:69] Setting dashboard=true in profile "no-preload-496524"
	I0120 12:33:53.965477  992109 addons.go:238] Setting addon storage-provisioner=true in "no-preload-496524"
	W0120 12:33:53.965487  992109 addons.go:247] addon storage-provisioner should already be in state true
	I0120 12:33:53.965490  992109 addons.go:238] Setting addon dashboard=true in "no-preload-496524"
	I0120 12:33:53.965481  992109 addons.go:69] Setting default-storageclass=true in profile "no-preload-496524"
	W0120 12:33:53.965502  992109 addons.go:247] addon dashboard should already be in state true
	I0120 12:33:53.965518  992109 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-496524"
	I0120 12:33:53.965520  992109 config.go:182] Loaded profile config "no-preload-496524": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:33:53.965528  992109 host.go:66] Checking if "no-preload-496524" exists ...
	I0120 12:33:53.965534  992109 host.go:66] Checking if "no-preload-496524" exists ...
	I0120 12:33:53.965514  992109 addons.go:69] Setting metrics-server=true in profile "no-preload-496524"
	I0120 12:33:53.965570  992109 addons.go:238] Setting addon metrics-server=true in "no-preload-496524"
	W0120 12:33:53.965584  992109 addons.go:247] addon metrics-server should already be in state true
	I0120 12:33:53.965628  992109 host.go:66] Checking if "no-preload-496524" exists ...
	I0120 12:33:53.965928  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:33:53.965934  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:33:53.965947  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:33:53.965963  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:53.965985  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:53.966029  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:33:53.966054  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:53.966065  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:53.966567  992109 out.go:177] * Verifying Kubernetes components...
	I0120 12:33:53.967881  992109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:33:53.983553  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43169
	I0120 12:33:53.984079  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:53.984654  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:33:53.984681  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:53.985111  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:53.985353  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetState
	I0120 12:33:53.986475  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38587
	I0120 12:33:53.986716  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33637
	I0120 12:33:53.987021  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:53.987492  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:53.987571  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:33:53.987588  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:53.987741  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44123
	I0120 12:33:53.987942  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:53.988075  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:53.988425  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:33:53.988440  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:53.988577  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:33:53.988627  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:53.988783  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:33:53.988797  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:53.988855  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:53.989000  992109 addons.go:238] Setting addon default-storageclass=true in "no-preload-496524"
	W0120 12:33:53.989019  992109 addons.go:247] addon default-storageclass should already be in state true
	I0120 12:33:53.989052  992109 host.go:66] Checking if "no-preload-496524" exists ...
	I0120 12:33:53.989187  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:53.989393  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:33:53.989420  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:33:53.989431  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:53.989455  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:53.989672  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:33:53.989711  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:54.005609  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46523
	I0120 12:33:54.006182  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:54.006760  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:33:54.006786  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:54.007131  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41029
	I0120 12:33:54.007443  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:54.008065  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:33:54.008108  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:54.008308  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34409
	I0120 12:33:54.008359  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:54.008993  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:33:54.009021  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:54.009407  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:54.009597  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetState
	I0120 12:33:54.011591  992109 main.go:141] libmachine: (no-preload-496524) Calling .DriverName
	I0120 12:33:54.013572  992109 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0120 12:33:54.014814  992109 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0120 12:33:54.015103  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:54.015538  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:33:54.015562  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:54.015921  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0120 12:33:54.015946  992109 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0120 12:33:54.015970  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHHostname
	I0120 12:33:54.015997  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:54.016619  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetState
	I0120 12:33:54.018868  992109 main.go:141] libmachine: (no-preload-496524) Calling .DriverName
	I0120 12:33:54.019948  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:33:54.020370  992109 main.go:141] libmachine: (no-preload-496524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:8f:cb", ip: ""} in network mk-no-preload-496524: {Iface:virbr3 ExpiryTime:2025-01-20 13:28:26 +0000 UTC Type:0 Mac:52:54:00:13:8f:cb Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-496524 Clientid:01:52:54:00:13:8f:cb}
	I0120 12:33:54.020397  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined IP address 192.168.61.107 and MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:33:54.020522  992109 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0120 12:33:54.020716  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHPort
	I0120 12:33:54.020885  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHKeyPath
	I0120 12:33:54.020989  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHUsername
	I0120 12:33:54.021095  992109 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/no-preload-496524/id_rsa Username:docker}
	I0120 12:33:54.021561  992109 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 12:33:54.021576  992109 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 12:33:54.021592  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHHostname
	I0120 12:33:54.024577  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHPort
	I0120 12:33:54.024641  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:33:54.024669  992109 main.go:141] libmachine: (no-preload-496524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:8f:cb", ip: ""} in network mk-no-preload-496524: {Iface:virbr3 ExpiryTime:2025-01-20 13:28:26 +0000 UTC Type:0 Mac:52:54:00:13:8f:cb Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-496524 Clientid:01:52:54:00:13:8f:cb}
	I0120 12:33:54.024695  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined IP address 192.168.61.107 and MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:33:54.024723  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHKeyPath
	I0120 12:33:54.024878  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHUsername
	I0120 12:33:54.025140  992109 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/no-preload-496524/id_rsa Username:docker}
	I0120 12:33:54.032584  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39261
	I0120 12:33:54.032936  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:54.033474  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:33:54.033497  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:54.033809  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:54.034011  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetState
	I0120 12:33:54.035349  992109 main.go:141] libmachine: (no-preload-496524) Calling .DriverName
	I0120 12:33:54.035539  992109 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 12:33:54.035557  992109 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 12:33:54.035573  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHHostname
	I0120 12:33:54.037812  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:33:54.038056  992109 main.go:141] libmachine: (no-preload-496524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:8f:cb", ip: ""} in network mk-no-preload-496524: {Iface:virbr3 ExpiryTime:2025-01-20 13:28:26 +0000 UTC Type:0 Mac:52:54:00:13:8f:cb Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-496524 Clientid:01:52:54:00:13:8f:cb}
	I0120 12:33:54.038080  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined IP address 192.168.61.107 and MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:33:54.038193  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHPort
	I0120 12:33:54.038321  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHKeyPath
	I0120 12:33:54.038429  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHUsername
	I0120 12:33:54.038547  992109 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/no-preload-496524/id_rsa Username:docker}
	I0120 12:33:54.041727  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43939
	I0120 12:33:54.042162  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:54.042671  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:33:54.042694  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:54.043048  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:54.043263  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetState
	I0120 12:33:54.044523  992109 main.go:141] libmachine: (no-preload-496524) Calling .DriverName
	I0120 12:33:54.046748  992109 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:33:51.190620  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:53.685783  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:54.048049  992109 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:33:54.048070  992109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 12:33:54.048087  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHHostname
	I0120 12:33:54.050560  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:33:54.051116  992109 main.go:141] libmachine: (no-preload-496524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:8f:cb", ip: ""} in network mk-no-preload-496524: {Iface:virbr3 ExpiryTime:2025-01-20 13:28:26 +0000 UTC Type:0 Mac:52:54:00:13:8f:cb Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-496524 Clientid:01:52:54:00:13:8f:cb}
	I0120 12:33:54.051143  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined IP address 192.168.61.107 and MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:33:54.051300  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHPort
	I0120 12:33:54.051493  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHKeyPath
	I0120 12:33:54.051649  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHUsername
	I0120 12:33:54.051769  992109 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/no-preload-496524/id_rsa Username:docker}
	I0120 12:33:54.174035  992109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:33:54.197637  992109 node_ready.go:35] waiting up to 6m0s for node "no-preload-496524" to be "Ready" ...
	I0120 12:33:54.210713  992109 node_ready.go:49] node "no-preload-496524" has status "Ready":"True"
	I0120 12:33:54.210742  992109 node_ready.go:38] duration metric: took 13.074849ms for node "no-preload-496524" to be "Ready" ...
	I0120 12:33:54.210757  992109 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:33:54.218615  992109 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-8pf2c" in "kube-system" namespace to be "Ready" ...
	I0120 12:33:54.300046  992109 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 12:33:54.300080  992109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0120 12:33:54.351225  992109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:33:54.353768  992109 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 12:33:54.353789  992109 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 12:33:54.368467  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0120 12:33:54.368496  992109 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0120 12:33:54.371467  992109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 12:33:54.389639  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0120 12:33:54.389660  992109 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0120 12:33:54.401448  992109 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 12:33:54.401467  992109 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 12:33:54.465233  992109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 12:33:54.465824  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0120 12:33:54.465853  992109 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0120 12:33:54.543139  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0120 12:33:54.543178  992109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0120 12:33:54.687210  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0120 12:33:54.687234  992109 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0120 12:33:54.744978  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0120 12:33:54.745012  992109 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0120 12:33:54.771298  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0120 12:33:54.771332  992109 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0120 12:33:54.852878  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0120 12:33:54.852914  992109 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0120 12:33:54.886329  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 12:33:54.886362  992109 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0120 12:33:54.964102  992109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 12:33:55.906127  992109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.534613086s)
	I0120 12:33:55.906207  992109 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:55.906212  992109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.554946671s)
	I0120 12:33:55.906270  992109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.440998293s)
	I0120 12:33:55.906220  992109 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:33:55.906307  992109 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:55.906338  992109 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:33:55.906275  992109 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:55.906404  992109 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:33:55.906812  992109 main.go:141] libmachine: (no-preload-496524) DBG | Closing plugin on server side
	I0120 12:33:55.906854  992109 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:55.906855  992109 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:55.906862  992109 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:55.906874  992109 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:55.906877  992109 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:55.906883  992109 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:55.906886  992109 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:33:55.906893  992109 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:33:55.907039  992109 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:55.907058  992109 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:55.907081  992109 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:55.907090  992109 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:33:55.907187  992109 main.go:141] libmachine: (no-preload-496524) DBG | Closing plugin on server side
	I0120 12:33:55.907189  992109 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:55.907213  992109 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:55.908759  992109 main.go:141] libmachine: (no-preload-496524) DBG | Closing plugin on server side
	I0120 12:33:55.908766  992109 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:55.908783  992109 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:55.908801  992109 addons.go:479] Verifying addon metrics-server=true in "no-preload-496524"
	I0120 12:33:55.909118  992109 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:55.909137  992109 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:55.939415  992109 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:55.939434  992109 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:33:55.939756  992109 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:55.939772  992109 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:56.225171  992109 pod_ready.go:103] pod "coredns-668d6bf9bc-8pf2c" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:56.900293  992109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.936108167s)
	I0120 12:33:56.900402  992109 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:56.900428  992109 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:33:56.900904  992109 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:56.900913  992109 main.go:141] libmachine: (no-preload-496524) DBG | Closing plugin on server side
	I0120 12:33:56.900924  992109 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:56.900945  992109 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:56.900952  992109 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:33:56.901226  992109 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:56.901246  992109 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:56.902642  992109 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-496524 addons enable metrics-server
	
	I0120 12:33:56.904289  992109 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0120 12:33:51.982905  993585 cri.go:89] found id: ""
	I0120 12:33:51.982931  993585 logs.go:282] 0 containers: []
	W0120 12:33:51.982939  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:51.982950  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:51.982998  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:52.017989  993585 cri.go:89] found id: ""
	I0120 12:33:52.018029  993585 logs.go:282] 0 containers: []
	W0120 12:33:52.018041  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:52.018049  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:52.018117  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:52.050405  993585 cri.go:89] found id: ""
	I0120 12:33:52.050432  993585 logs.go:282] 0 containers: []
	W0120 12:33:52.050442  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:52.050450  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:52.050540  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:52.080729  993585 cri.go:89] found id: ""
	I0120 12:33:52.080760  993585 logs.go:282] 0 containers: []
	W0120 12:33:52.080767  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:52.080773  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:52.080826  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:52.110809  993585 cri.go:89] found id: ""
	I0120 12:33:52.110839  993585 logs.go:282] 0 containers: []
	W0120 12:33:52.110849  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:52.110856  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:52.110915  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:52.143357  993585 cri.go:89] found id: ""
	I0120 12:33:52.143387  993585 logs.go:282] 0 containers: []
	W0120 12:33:52.143397  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:52.143405  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:52.143475  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:52.179555  993585 cri.go:89] found id: ""
	I0120 12:33:52.179584  993585 logs.go:282] 0 containers: []
	W0120 12:33:52.179594  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:52.179607  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:52.179622  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:52.268223  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:52.268257  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:52.304968  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:52.305008  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:52.354773  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:52.354811  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:52.366909  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:52.366933  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:52.434038  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:54.934844  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:54.954370  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:54.954453  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:54.987088  993585 cri.go:89] found id: ""
	I0120 12:33:54.987124  993585 logs.go:282] 0 containers: []
	W0120 12:33:54.987136  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:54.987144  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:54.987207  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:55.020248  993585 cri.go:89] found id: ""
	I0120 12:33:55.020282  993585 logs.go:282] 0 containers: []
	W0120 12:33:55.020293  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:55.020301  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:55.020374  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:55.059488  993585 cri.go:89] found id: ""
	I0120 12:33:55.059529  993585 logs.go:282] 0 containers: []
	W0120 12:33:55.059541  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:55.059550  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:55.059614  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:55.095049  993585 cri.go:89] found id: ""
	I0120 12:33:55.095088  993585 logs.go:282] 0 containers: []
	W0120 12:33:55.095102  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:55.095112  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:55.095189  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:55.131993  993585 cri.go:89] found id: ""
	I0120 12:33:55.132028  993585 logs.go:282] 0 containers: []
	W0120 12:33:55.132039  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:55.132045  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:55.132107  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:55.168716  993585 cri.go:89] found id: ""
	I0120 12:33:55.168744  993585 logs.go:282] 0 containers: []
	W0120 12:33:55.168755  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:55.168764  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:55.168828  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:55.211532  993585 cri.go:89] found id: ""
	I0120 12:33:55.211566  993585 logs.go:282] 0 containers: []
	W0120 12:33:55.211578  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:55.211591  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:55.211658  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:55.245961  993585 cri.go:89] found id: ""
	I0120 12:33:55.245993  993585 logs.go:282] 0 containers: []
	W0120 12:33:55.246004  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:55.246019  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:55.246036  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:55.297819  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:55.297865  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:55.314469  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:55.314514  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:55.386489  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:55.386544  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:55.386566  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:55.466897  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:55.466954  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:52.642467  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:55.143921  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:55.686287  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:58.185263  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:56.905477  992109 addons.go:514] duration metric: took 2.940174389s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I0120 12:33:57.224557  992109 pod_ready.go:93] pod "coredns-668d6bf9bc-8pf2c" in "kube-system" namespace has status "Ready":"True"
	I0120 12:33:57.224585  992109 pod_ready.go:82] duration metric: took 3.005934718s for pod "coredns-668d6bf9bc-8pf2c" in "kube-system" namespace to be "Ready" ...
	I0120 12:33:57.224599  992109 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-rdj6t" in "kube-system" namespace to be "Ready" ...
	I0120 12:33:57.228981  992109 pod_ready.go:93] pod "coredns-668d6bf9bc-rdj6t" in "kube-system" namespace has status "Ready":"True"
	I0120 12:33:57.228999  992109 pod_ready.go:82] duration metric: took 4.392102ms for pod "coredns-668d6bf9bc-rdj6t" in "kube-system" namespace to be "Ready" ...
	I0120 12:33:57.229007  992109 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:33:59.239998  992109 pod_ready.go:103] pod "etcd-no-preload-496524" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:58.014588  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:58.032828  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:58.032905  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:58.075631  993585 cri.go:89] found id: ""
	I0120 12:33:58.075671  993585 logs.go:282] 0 containers: []
	W0120 12:33:58.075774  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:58.075801  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:58.075887  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:58.117897  993585 cri.go:89] found id: ""
	I0120 12:33:58.117934  993585 logs.go:282] 0 containers: []
	W0120 12:33:58.117945  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:58.117953  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:58.118022  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:58.161106  993585 cri.go:89] found id: ""
	I0120 12:33:58.161138  993585 logs.go:282] 0 containers: []
	W0120 12:33:58.161149  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:58.161157  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:58.161222  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:58.203869  993585 cri.go:89] found id: ""
	I0120 12:33:58.203905  993585 logs.go:282] 0 containers: []
	W0120 12:33:58.203915  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:58.203923  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:58.203991  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:58.247905  993585 cri.go:89] found id: ""
	I0120 12:33:58.247938  993585 logs.go:282] 0 containers: []
	W0120 12:33:58.247949  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:58.247956  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:58.248016  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:58.281395  993585 cri.go:89] found id: ""
	I0120 12:33:58.281426  993585 logs.go:282] 0 containers: []
	W0120 12:33:58.281437  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:58.281445  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:58.281506  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:58.318950  993585 cri.go:89] found id: ""
	I0120 12:33:58.318982  993585 logs.go:282] 0 containers: []
	W0120 12:33:58.318991  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:58.318996  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:58.319055  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:58.351052  993585 cri.go:89] found id: ""
	I0120 12:33:58.351080  993585 logs.go:282] 0 containers: []
	W0120 12:33:58.351089  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:58.351107  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:58.351134  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:58.363459  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:58.363489  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:58.427460  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:58.427502  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:58.427520  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:58.502031  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:58.502065  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:58.539404  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:58.539434  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:01.093414  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:01.106353  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:01.106422  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:01.145552  993585 cri.go:89] found id: ""
	I0120 12:34:01.145588  993585 logs.go:282] 0 containers: []
	W0120 12:34:01.145601  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:01.145610  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:01.145678  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:01.179253  993585 cri.go:89] found id: ""
	I0120 12:34:01.179288  993585 logs.go:282] 0 containers: []
	W0120 12:34:01.179299  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:01.179307  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:01.179374  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:01.215878  993585 cri.go:89] found id: ""
	I0120 12:34:01.215916  993585 logs.go:282] 0 containers: []
	W0120 12:34:01.215928  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:01.215937  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:01.216001  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:01.260751  993585 cri.go:89] found id: ""
	I0120 12:34:01.260783  993585 logs.go:282] 0 containers: []
	W0120 12:34:01.260795  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:01.260807  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:01.260883  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:01.303022  993585 cri.go:89] found id: ""
	I0120 12:34:01.303053  993585 logs.go:282] 0 containers: []
	W0120 12:34:01.303065  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:01.303074  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:01.303145  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:01.342483  993585 cri.go:89] found id: ""
	I0120 12:34:01.342539  993585 logs.go:282] 0 containers: []
	W0120 12:34:01.342552  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:01.342562  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:01.342642  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:01.374569  993585 cri.go:89] found id: ""
	I0120 12:34:01.374608  993585 logs.go:282] 0 containers: []
	W0120 12:34:01.374618  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:01.374633  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:01.374696  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:01.406807  993585 cri.go:89] found id: ""
	I0120 12:34:01.406838  993585 logs.go:282] 0 containers: []
	W0120 12:34:01.406848  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:01.406862  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:01.406887  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:01.446081  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:01.446111  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:01.498826  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:01.498865  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:01.512333  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:01.512370  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:01.591631  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:01.591658  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:01.591676  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:57.641818  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:00.141288  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:02.142885  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:00.685449  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:02.688229  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:01.734840  992109 pod_ready.go:103] pod "etcd-no-preload-496524" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:03.790112  992109 pod_ready.go:103] pod "etcd-no-preload-496524" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:04.235638  992109 pod_ready.go:93] pod "etcd-no-preload-496524" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:04.235671  992109 pod_ready.go:82] duration metric: took 7.006654161s for pod "etcd-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:04.235686  992109 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:04.240203  992109 pod_ready.go:93] pod "kube-apiserver-no-preload-496524" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:04.240233  992109 pod_ready.go:82] duration metric: took 4.537744ms for pod "kube-apiserver-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:04.240248  992109 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:04.244405  992109 pod_ready.go:93] pod "kube-controller-manager-no-preload-496524" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:04.244431  992109 pod_ready.go:82] duration metric: took 4.172774ms for pod "kube-controller-manager-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:04.244445  992109 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dpn56" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:04.248277  992109 pod_ready.go:93] pod "kube-proxy-dpn56" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:04.248303  992109 pod_ready.go:82] duration metric: took 3.849341ms for pod "kube-proxy-dpn56" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:04.248315  992109 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:04.251995  992109 pod_ready.go:93] pod "kube-scheduler-no-preload-496524" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:04.252016  992109 pod_ready.go:82] duration metric: took 3.69304ms for pod "kube-scheduler-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:04.252025  992109 pod_ready.go:39] duration metric: took 10.041253574s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:34:04.252040  992109 api_server.go:52] waiting for apiserver process to appear ...
	I0120 12:34:04.252101  992109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:04.288797  992109 api_server.go:72] duration metric: took 10.323505838s to wait for apiserver process to appear ...
	I0120 12:34:04.288829  992109 api_server.go:88] waiting for apiserver healthz status ...
	I0120 12:34:04.288878  992109 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0120 12:34:04.297424  992109 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
	I0120 12:34:04.299152  992109 api_server.go:141] control plane version: v1.32.0
	I0120 12:34:04.299176  992109 api_server.go:131] duration metric: took 10.340981ms to wait for apiserver health ...
	I0120 12:34:04.299188  992109 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 12:34:04.437151  992109 system_pods.go:59] 9 kube-system pods found
	I0120 12:34:04.437187  992109 system_pods.go:61] "coredns-668d6bf9bc-8pf2c" [9402090c-afdc-4fd7-a673-155ca87b9afe] Running
	I0120 12:34:04.437194  992109 system_pods.go:61] "coredns-668d6bf9bc-rdj6t" [f7882da6-0b57-402a-a902-6c4e6a8c6cd1] Running
	I0120 12:34:04.437200  992109 system_pods.go:61] "etcd-no-preload-496524" [430610d7-4491-4d35-93d6-71738b1cad0f] Running
	I0120 12:34:04.437205  992109 system_pods.go:61] "kube-apiserver-no-preload-496524" [d028d3c0-5ee8-46cc-b8e5-95f7d07e43ca] Running
	I0120 12:34:04.437210  992109 system_pods.go:61] "kube-controller-manager-no-preload-496524" [b11b36da-c5a3-4fc6-8619-4f12fda64f63] Running
	I0120 12:34:04.437215  992109 system_pods.go:61] "kube-proxy-dpn56" [dbb78c21-4dfb-4a4f-9ca0-ff006da5d4b4] Running
	I0120 12:34:04.437219  992109 system_pods.go:61] "kube-scheduler-no-preload-496524" [80058f6c-526c-487f-82a5-74df5f2e0174] Running
	I0120 12:34:04.437227  992109 system_pods.go:61] "metrics-server-f79f97bbb-dbx78" [c8fb707c-75c2-42b6-802e-52a09222f9ea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 12:34:04.437234  992109 system_pods.go:61] "storage-provisioner" [14187f8e-01fd-45ac-a749-82ba272b727f] Running
	I0120 12:34:04.437246  992109 system_pods.go:74] duration metric: took 138.05086ms to wait for pod list to return data ...
	I0120 12:34:04.437257  992109 default_sa.go:34] waiting for default service account to be created ...
	I0120 12:34:04.636609  992109 default_sa.go:45] found service account: "default"
	I0120 12:34:04.636747  992109 default_sa.go:55] duration metric: took 199.476374ms for default service account to be created ...
	I0120 12:34:04.636770  992109 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 12:34:04.836002  992109 system_pods.go:87] 9 kube-system pods found
	I0120 12:34:04.171834  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:04.189904  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:04.189975  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:04.227671  993585 cri.go:89] found id: ""
	I0120 12:34:04.227705  993585 logs.go:282] 0 containers: []
	W0120 12:34:04.227717  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:04.227725  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:04.227789  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:04.266288  993585 cri.go:89] found id: ""
	I0120 12:34:04.266319  993585 logs.go:282] 0 containers: []
	W0120 12:34:04.266329  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:04.266337  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:04.266415  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:04.303909  993585 cri.go:89] found id: ""
	I0120 12:34:04.303944  993585 logs.go:282] 0 containers: []
	W0120 12:34:04.303952  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:04.303965  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:04.304029  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:04.342095  993585 cri.go:89] found id: ""
	I0120 12:34:04.342135  993585 logs.go:282] 0 containers: []
	W0120 12:34:04.342148  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:04.342156  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:04.342220  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:04.374237  993585 cri.go:89] found id: ""
	I0120 12:34:04.374268  993585 logs.go:282] 0 containers: []
	W0120 12:34:04.374290  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:04.374299  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:04.374383  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:04.407930  993585 cri.go:89] found id: ""
	I0120 12:34:04.407962  993585 logs.go:282] 0 containers: []
	W0120 12:34:04.407973  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:04.407981  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:04.408047  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:04.444108  993585 cri.go:89] found id: ""
	I0120 12:34:04.444133  993585 logs.go:282] 0 containers: []
	W0120 12:34:04.444140  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:04.444146  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:04.444208  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:04.482725  993585 cri.go:89] found id: ""
	I0120 12:34:04.482759  993585 logs.go:282] 0 containers: []
	W0120 12:34:04.482770  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:04.482783  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:04.482796  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:04.536692  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:04.536732  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:04.549928  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:04.549952  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:04.616622  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:04.616645  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:04.616661  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:04.701813  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:04.701846  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:04.642669  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:05.136388  992635 pod_ready.go:82] duration metric: took 4m0.000888072s for pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace to be "Ready" ...
	E0120 12:34:05.136424  992635 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace to be "Ready" (will not retry!)
	I0120 12:34:05.136487  992635 pod_ready.go:39] duration metric: took 4m15.539523942s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:34:05.136548  992635 kubeadm.go:597] duration metric: took 4m23.239372129s to restartPrimaryControlPlane
	W0120 12:34:05.136646  992635 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0120 12:34:05.136701  992635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 12:34:05.185480  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:07.185630  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:09.185867  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:07.245120  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:07.257846  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:07.257917  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:07.293851  993585 cri.go:89] found id: ""
	I0120 12:34:07.293885  993585 logs.go:282] 0 containers: []
	W0120 12:34:07.293898  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:07.293906  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:07.293970  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:07.328532  993585 cri.go:89] found id: ""
	I0120 12:34:07.328568  993585 logs.go:282] 0 containers: []
	W0120 12:34:07.328579  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:07.328588  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:07.328652  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:07.362019  993585 cri.go:89] found id: ""
	I0120 12:34:07.362053  993585 logs.go:282] 0 containers: []
	W0120 12:34:07.362065  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:07.362073  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:07.362136  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:07.394170  993585 cri.go:89] found id: ""
	I0120 12:34:07.394211  993585 logs.go:282] 0 containers: []
	W0120 12:34:07.394223  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:07.394231  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:07.394303  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:07.426650  993585 cri.go:89] found id: ""
	I0120 12:34:07.426694  993585 logs.go:282] 0 containers: []
	W0120 12:34:07.426711  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:07.426719  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:07.426786  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:07.472659  993585 cri.go:89] found id: ""
	I0120 12:34:07.472695  993585 logs.go:282] 0 containers: []
	W0120 12:34:07.472706  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:07.472715  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:07.472788  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:07.506741  993585 cri.go:89] found id: ""
	I0120 12:34:07.506768  993585 logs.go:282] 0 containers: []
	W0120 12:34:07.506777  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:07.506782  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:07.506845  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:07.543976  993585 cri.go:89] found id: ""
	I0120 12:34:07.544007  993585 logs.go:282] 0 containers: []
	W0120 12:34:07.544017  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:07.544028  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:07.544039  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:07.618073  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:07.618109  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:07.633284  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:07.633332  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:07.703104  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:07.703134  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:07.703151  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:07.786367  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:07.786404  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:10.324611  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:10.337443  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:10.337513  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:10.371387  993585 cri.go:89] found id: ""
	I0120 12:34:10.371421  993585 logs.go:282] 0 containers: []
	W0120 12:34:10.371432  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:10.371489  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:10.371545  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:10.403803  993585 cri.go:89] found id: ""
	I0120 12:34:10.403829  993585 logs.go:282] 0 containers: []
	W0120 12:34:10.403837  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:10.403843  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:10.403891  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:10.434806  993585 cri.go:89] found id: ""
	I0120 12:34:10.434829  993585 logs.go:282] 0 containers: []
	W0120 12:34:10.434836  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:10.434841  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:10.434897  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:10.465821  993585 cri.go:89] found id: ""
	I0120 12:34:10.465849  993585 logs.go:282] 0 containers: []
	W0120 12:34:10.465856  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:10.465861  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:10.465905  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:10.497007  993585 cri.go:89] found id: ""
	I0120 12:34:10.497029  993585 logs.go:282] 0 containers: []
	W0120 12:34:10.497037  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:10.497043  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:10.497086  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:10.527026  993585 cri.go:89] found id: ""
	I0120 12:34:10.527050  993585 logs.go:282] 0 containers: []
	W0120 12:34:10.527060  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:10.527069  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:10.527134  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:10.557590  993585 cri.go:89] found id: ""
	I0120 12:34:10.557621  993585 logs.go:282] 0 containers: []
	W0120 12:34:10.557631  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:10.557638  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:10.557694  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:10.587747  993585 cri.go:89] found id: ""
	I0120 12:34:10.587777  993585 logs.go:282] 0 containers: []
	W0120 12:34:10.587787  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:10.587799  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:10.587813  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:10.635855  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:10.635886  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:10.649110  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:10.649147  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:10.719339  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:10.719382  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:10.719399  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:10.791808  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:10.791839  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:11.684329  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:13.686198  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:13.343317  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:13.356667  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:13.356731  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:13.388894  993585 cri.go:89] found id: ""
	I0120 12:34:13.388926  993585 logs.go:282] 0 containers: []
	W0120 12:34:13.388937  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:13.388944  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:13.389013  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:13.419319  993585 cri.go:89] found id: ""
	I0120 12:34:13.419350  993585 logs.go:282] 0 containers: []
	W0120 12:34:13.419360  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:13.419374  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:13.419440  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:13.451302  993585 cri.go:89] found id: ""
	I0120 12:34:13.451328  993585 logs.go:282] 0 containers: []
	W0120 12:34:13.451335  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:13.451345  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:13.451398  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:13.485033  993585 cri.go:89] found id: ""
	I0120 12:34:13.485062  993585 logs.go:282] 0 containers: []
	W0120 12:34:13.485073  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:13.485079  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:13.485126  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:13.515362  993585 cri.go:89] found id: ""
	I0120 12:34:13.515392  993585 logs.go:282] 0 containers: []
	W0120 12:34:13.515401  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:13.515410  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:13.515481  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:13.545307  993585 cri.go:89] found id: ""
	I0120 12:34:13.545356  993585 logs.go:282] 0 containers: []
	W0120 12:34:13.545366  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:13.545374  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:13.545436  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:13.575714  993585 cri.go:89] found id: ""
	I0120 12:34:13.575738  993585 logs.go:282] 0 containers: []
	W0120 12:34:13.575745  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:13.575751  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:13.575805  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:13.606046  993585 cri.go:89] found id: ""
	I0120 12:34:13.606099  993585 logs.go:282] 0 containers: []
	W0120 12:34:13.606112  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:13.606127  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:13.606145  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:13.667543  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:13.667567  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:13.667584  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:13.741766  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:13.741795  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:13.778095  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:13.778131  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:13.830514  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:13.830554  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:16.343728  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:16.356586  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:16.356665  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:16.390098  993585 cri.go:89] found id: ""
	I0120 12:34:16.390132  993585 logs.go:282] 0 containers: []
	W0120 12:34:16.390146  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:16.390155  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:16.390228  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:16.422651  993585 cri.go:89] found id: ""
	I0120 12:34:16.422682  993585 logs.go:282] 0 containers: []
	W0120 12:34:16.422690  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:16.422699  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:16.422755  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:16.455349  993585 cri.go:89] found id: ""
	I0120 12:34:16.455378  993585 logs.go:282] 0 containers: []
	W0120 12:34:16.455390  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:16.455398  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:16.455467  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:16.494862  993585 cri.go:89] found id: ""
	I0120 12:34:16.494893  993585 logs.go:282] 0 containers: []
	W0120 12:34:16.494904  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:16.494911  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:16.494975  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:16.526039  993585 cri.go:89] found id: ""
	I0120 12:34:16.526070  993585 logs.go:282] 0 containers: []
	W0120 12:34:16.526079  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:16.526087  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:16.526159  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:16.557323  993585 cri.go:89] found id: ""
	I0120 12:34:16.557360  993585 logs.go:282] 0 containers: []
	W0120 12:34:16.557376  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:16.557382  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:16.557444  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:16.607483  993585 cri.go:89] found id: ""
	I0120 12:34:16.607516  993585 logs.go:282] 0 containers: []
	W0120 12:34:16.607527  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:16.607535  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:16.607600  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:16.639620  993585 cri.go:89] found id: ""
	I0120 12:34:16.639644  993585 logs.go:282] 0 containers: []
	W0120 12:34:16.639654  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:16.639665  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:16.639681  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:16.675471  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:16.675500  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:16.726780  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:16.726814  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:16.739029  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:16.739060  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:16.802705  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:16.802738  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:16.802752  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:16.185205  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:18.685055  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:19.379610  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:19.392739  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:19.392813  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:19.423927  993585 cri.go:89] found id: ""
	I0120 12:34:19.423959  993585 logs.go:282] 0 containers: []
	W0120 12:34:19.423971  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:19.423979  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:19.424049  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:19.455104  993585 cri.go:89] found id: ""
	I0120 12:34:19.455131  993585 logs.go:282] 0 containers: []
	W0120 12:34:19.455140  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:19.455145  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:19.455192  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:19.487611  993585 cri.go:89] found id: ""
	I0120 12:34:19.487642  993585 logs.go:282] 0 containers: []
	W0120 12:34:19.487652  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:19.487664  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:19.487728  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:19.517582  993585 cri.go:89] found id: ""
	I0120 12:34:19.517613  993585 logs.go:282] 0 containers: []
	W0120 12:34:19.517638  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:19.517665  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:19.517734  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:19.549138  993585 cri.go:89] found id: ""
	I0120 12:34:19.549177  993585 logs.go:282] 0 containers: []
	W0120 12:34:19.549190  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:19.549199  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:19.549263  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:19.584290  993585 cri.go:89] found id: ""
	I0120 12:34:19.584317  993585 logs.go:282] 0 containers: []
	W0120 12:34:19.584328  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:19.584334  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:19.584384  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:19.618867  993585 cri.go:89] found id: ""
	I0120 12:34:19.618900  993585 logs.go:282] 0 containers: []
	W0120 12:34:19.618909  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:19.618915  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:19.618967  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:19.651916  993585 cri.go:89] found id: ""
	I0120 12:34:19.651956  993585 logs.go:282] 0 containers: []
	W0120 12:34:19.651968  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:19.651981  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:19.651997  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:19.691207  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:19.691239  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:19.742403  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:19.742436  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:19.755212  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:19.755245  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:19.818642  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:19.818671  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:19.818686  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:21.184740  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:23.685218  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:22.398142  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:22.415423  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:22.415497  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:22.450558  993585 cri.go:89] found id: ""
	I0120 12:34:22.450595  993585 logs.go:282] 0 containers: []
	W0120 12:34:22.450606  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:22.450613  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:22.450672  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:22.481655  993585 cri.go:89] found id: ""
	I0120 12:34:22.481686  993585 logs.go:282] 0 containers: []
	W0120 12:34:22.481697  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:22.481706  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:22.481773  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:22.515465  993585 cri.go:89] found id: ""
	I0120 12:34:22.515498  993585 logs.go:282] 0 containers: []
	W0120 12:34:22.515509  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:22.515516  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:22.515575  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:22.546538  993585 cri.go:89] found id: ""
	I0120 12:34:22.546566  993585 logs.go:282] 0 containers: []
	W0120 12:34:22.546575  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:22.546583  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:22.546640  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:22.577112  993585 cri.go:89] found id: ""
	I0120 12:34:22.577140  993585 logs.go:282] 0 containers: []
	W0120 12:34:22.577151  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:22.577158  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:22.577216  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:22.610604  993585 cri.go:89] found id: ""
	I0120 12:34:22.610640  993585 logs.go:282] 0 containers: []
	W0120 12:34:22.610650  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:22.610657  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:22.610718  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:22.641708  993585 cri.go:89] found id: ""
	I0120 12:34:22.641737  993585 logs.go:282] 0 containers: []
	W0120 12:34:22.641745  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:22.641752  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:22.641818  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:22.671952  993585 cri.go:89] found id: ""
	I0120 12:34:22.671977  993585 logs.go:282] 0 containers: []
	W0120 12:34:22.671984  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:22.671994  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:22.672004  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:22.722515  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:22.722552  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:22.734806  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:22.734827  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:22.797517  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:22.797554  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:22.797573  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:22.872821  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:22.872851  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:25.413129  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:25.425926  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:25.426021  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:25.462540  993585 cri.go:89] found id: ""
	I0120 12:34:25.462574  993585 logs.go:282] 0 containers: []
	W0120 12:34:25.462584  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:25.462595  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:25.462650  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:25.493646  993585 cri.go:89] found id: ""
	I0120 12:34:25.493672  993585 logs.go:282] 0 containers: []
	W0120 12:34:25.493679  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:25.493688  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:25.493732  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:25.529070  993585 cri.go:89] found id: ""
	I0120 12:34:25.529103  993585 logs.go:282] 0 containers: []
	W0120 12:34:25.529126  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:25.529135  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:25.529199  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:25.562199  993585 cri.go:89] found id: ""
	I0120 12:34:25.562225  993585 logs.go:282] 0 containers: []
	W0120 12:34:25.562258  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:25.562265  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:25.562329  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:25.597698  993585 cri.go:89] found id: ""
	I0120 12:34:25.597728  993585 logs.go:282] 0 containers: []
	W0120 12:34:25.597739  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:25.597745  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:25.597794  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:25.632923  993585 cri.go:89] found id: ""
	I0120 12:34:25.632950  993585 logs.go:282] 0 containers: []
	W0120 12:34:25.632961  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:25.632968  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:25.633031  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:25.664379  993585 cri.go:89] found id: ""
	I0120 12:34:25.664409  993585 logs.go:282] 0 containers: []
	W0120 12:34:25.664419  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:25.664434  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:25.664490  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:25.694965  993585 cri.go:89] found id: ""
	I0120 12:34:25.694992  993585 logs.go:282] 0 containers: []
	W0120 12:34:25.695002  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:25.695014  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:25.695027  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:25.742956  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:25.742987  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:25.755095  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:25.755122  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:25.822777  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:25.822807  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:25.822824  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:25.895354  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:25.895389  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:25.685681  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:28.183977  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:28.433411  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:28.445691  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:28.445750  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:28.475915  993585 cri.go:89] found id: ""
	I0120 12:34:28.475949  993585 logs.go:282] 0 containers: []
	W0120 12:34:28.475961  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:28.475969  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:28.476029  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:28.506219  993585 cri.go:89] found id: ""
	I0120 12:34:28.506253  993585 logs.go:282] 0 containers: []
	W0120 12:34:28.506264  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:28.506272  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:28.506332  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:28.539662  993585 cri.go:89] found id: ""
	I0120 12:34:28.539693  993585 logs.go:282] 0 containers: []
	W0120 12:34:28.539704  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:28.539712  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:28.539775  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:28.570360  993585 cri.go:89] found id: ""
	I0120 12:34:28.570390  993585 logs.go:282] 0 containers: []
	W0120 12:34:28.570398  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:28.570404  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:28.570466  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:28.599217  993585 cri.go:89] found id: ""
	I0120 12:34:28.599242  993585 logs.go:282] 0 containers: []
	W0120 12:34:28.599249  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:28.599255  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:28.599310  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:28.629325  993585 cri.go:89] found id: ""
	I0120 12:34:28.629366  993585 logs.go:282] 0 containers: []
	W0120 12:34:28.629378  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:28.629386  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:28.629453  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:28.659625  993585 cri.go:89] found id: ""
	I0120 12:34:28.659657  993585 logs.go:282] 0 containers: []
	W0120 12:34:28.659668  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:28.659675  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:28.659734  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:28.695195  993585 cri.go:89] found id: ""
	I0120 12:34:28.695222  993585 logs.go:282] 0 containers: []
	W0120 12:34:28.695232  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:28.695242  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:28.695255  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:28.756910  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:28.756942  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:28.771902  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:28.771932  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:28.859464  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:28.859491  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:28.859510  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:28.931739  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:28.931769  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:31.472251  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:31.484961  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:31.485019  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:31.518142  993585 cri.go:89] found id: ""
	I0120 12:34:31.518175  993585 logs.go:282] 0 containers: []
	W0120 12:34:31.518187  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:31.518194  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:31.518241  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:31.550125  993585 cri.go:89] found id: ""
	I0120 12:34:31.550187  993585 logs.go:282] 0 containers: []
	W0120 12:34:31.550201  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:31.550210  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:31.550274  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:31.583805  993585 cri.go:89] found id: ""
	I0120 12:34:31.583834  993585 logs.go:282] 0 containers: []
	W0120 12:34:31.583846  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:31.583854  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:31.583908  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:31.626186  993585 cri.go:89] found id: ""
	I0120 12:34:31.626209  993585 logs.go:282] 0 containers: []
	W0120 12:34:31.626217  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:31.626223  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:31.626271  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:31.657467  993585 cri.go:89] found id: ""
	I0120 12:34:31.657507  993585 logs.go:282] 0 containers: []
	W0120 12:34:31.657519  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:31.657527  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:31.657594  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:31.686983  993585 cri.go:89] found id: ""
	I0120 12:34:31.687008  993585 logs.go:282] 0 containers: []
	W0120 12:34:31.687015  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:31.687021  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:31.687075  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:31.721602  993585 cri.go:89] found id: ""
	I0120 12:34:31.721632  993585 logs.go:282] 0 containers: []
	W0120 12:34:31.721645  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:31.721651  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:31.721701  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:31.751369  993585 cri.go:89] found id: ""
	I0120 12:34:31.751394  993585 logs.go:282] 0 containers: []
	W0120 12:34:31.751401  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:31.751412  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:31.751435  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:31.816285  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:31.816327  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:31.816344  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:31.891930  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:31.891969  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:31.927472  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:31.927503  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:32.776819  992635 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.640090134s)
	I0120 12:34:32.776911  992635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 12:34:32.792110  992635 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 12:34:32.801453  992635 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:34:32.809836  992635 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:34:32.809855  992635 kubeadm.go:157] found existing configuration files:
	
	I0120 12:34:32.809892  992635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 12:34:32.817968  992635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:34:32.818014  992635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:34:32.826142  992635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 12:34:32.834058  992635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:34:32.834109  992635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:34:32.842776  992635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 12:34:32.850601  992635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:34:32.850645  992635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:34:32.858854  992635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 12:34:32.866819  992635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:34:32.866860  992635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
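
The grep / rm sequence above is minikube's stale-kubeconfig cleanup: for each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf it checks whether the file already points at the expected control-plane endpoint and removes it otherwise, so the kubeadm init that follows can regenerate it. A condensed sketch of the same pattern, with the endpoint and paths taken from the log lines above:

	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # keep the file only if it already references the expected endpoint
	  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done
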
	I0120 12:34:32.875193  992635 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 12:34:32.920522  992635 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 12:34:32.920570  992635 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 12:34:33.023871  992635 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 12:34:33.024001  992635 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 12:34:33.024134  992635 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 12:34:33.032806  992635 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 12:34:33.035443  992635 out.go:235]   - Generating certificates and keys ...
	I0120 12:34:33.035549  992635 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 12:34:33.035644  992635 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 12:34:33.035776  992635 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 12:34:33.035886  992635 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 12:34:33.035993  992635 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 12:34:33.036086  992635 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 12:34:33.037424  992635 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 12:34:33.037490  992635 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 12:34:33.037563  992635 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 12:34:33.037649  992635 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 12:34:33.037695  992635 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 12:34:33.037750  992635 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 12:34:33.105282  992635 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 12:34:33.414668  992635 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 12:34:33.727680  992635 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 12:34:33.812741  992635 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 12:34:33.984459  992635 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 12:34:33.985140  992635 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 12:34:33.988084  992635 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 12:34:30.184978  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:32.185137  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:31.974997  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:31.975024  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:34.488614  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:34.506548  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:34.506624  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:34.563005  993585 cri.go:89] found id: ""
	I0120 12:34:34.563039  993585 logs.go:282] 0 containers: []
	W0120 12:34:34.563052  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:34.563060  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:34.563124  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:34.594244  993585 cri.go:89] found id: ""
	I0120 12:34:34.594284  993585 logs.go:282] 0 containers: []
	W0120 12:34:34.594296  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:34.594304  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:34.594373  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:34.625619  993585 cri.go:89] found id: ""
	I0120 12:34:34.625654  993585 logs.go:282] 0 containers: []
	W0120 12:34:34.625665  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:34.625673  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:34.625738  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:34.658589  993585 cri.go:89] found id: ""
	I0120 12:34:34.658619  993585 logs.go:282] 0 containers: []
	W0120 12:34:34.658627  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:34.658635  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:34.658703  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:34.689254  993585 cri.go:89] found id: ""
	I0120 12:34:34.689283  993585 logs.go:282] 0 containers: []
	W0120 12:34:34.689294  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:34.689301  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:34.689361  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:34.718991  993585 cri.go:89] found id: ""
	I0120 12:34:34.719017  993585 logs.go:282] 0 containers: []
	W0120 12:34:34.719025  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:34.719032  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:34.719087  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:34.755470  993585 cri.go:89] found id: ""
	I0120 12:34:34.755506  993585 logs.go:282] 0 containers: []
	W0120 12:34:34.755517  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:34.755525  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:34.755591  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:34.794468  993585 cri.go:89] found id: ""
	I0120 12:34:34.794511  993585 logs.go:282] 0 containers: []
	W0120 12:34:34.794536  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:34.794550  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:34.794567  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:34.872224  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:34.872255  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:34.906752  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:34.906782  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:34.958387  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:34.958418  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:34.970224  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:34.970247  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:35.042447  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
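
Each gathering pass in this log walks the same list of control-plane components and asks the CRI runtime whether any container (running or exited) exists for each; every query here returns an empty id list, which is why the describe-nodes call that follows has no apiserver to reach. A sketch of the same per-component check, using the crictl flags shown above:

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  # an empty result means no container matches that name at all
	  if [ -n "$ids" ]; then echo "$name: $ids"; else echo "$name: none"; fi
	done
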
	I0120 12:34:33.990145  992635 out.go:235]   - Booting up control plane ...
	I0120 12:34:33.990278  992635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 12:34:33.990399  992635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 12:34:33.990496  992635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 12:34:34.010394  992635 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 12:34:34.017815  992635 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 12:34:34.017877  992635 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 12:34:34.137419  992635 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 12:34:34.137546  992635 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 12:34:35.139769  992635 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002196985s
	I0120 12:34:35.139867  992635 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 12:34:34.685113  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:36.685852  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:39.185481  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:39.641165  992635 kubeadm.go:310] [api-check] The API server is healthy after 4.501397328s
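
The two waits above poll fixed health endpoints: the kubelet's local healthz on port 10248 and then the apiserver itself. A rough way to check the same endpoints by hand from inside the node is sketched below; the /readyz path and anonymous access are assumptions about a default-configured apiserver rather than anything shown in this log:

	# kubelet liveness, the endpoint named in the kubelet-check line above
	curl -sf http://127.0.0.1:10248/healthz && echo "kubelet: ok"
	# apiserver readiness on the cluster port; -k because the serving cert is cluster-internal
	curl -skf https://127.0.0.1:8443/readyz && echo "apiserver: ready"
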
	I0120 12:34:39.658614  992635 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 12:34:40.171926  992635 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 12:34:40.198719  992635 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 12:34:40.198914  992635 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-987349 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 12:34:40.207929  992635 kubeadm.go:310] [bootstrap-token] Using token: n4uhes.3ig136bhcqw1unce
	I0120 12:34:40.209373  992635 out.go:235]   - Configuring RBAC rules ...
	I0120 12:34:40.209504  992635 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 12:34:40.213198  992635 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 12:34:40.219884  992635 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 12:34:40.223154  992635 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 12:34:40.228539  992635 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 12:34:40.232011  992635 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 12:34:40.369420  992635 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 12:34:40.817626  992635 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 12:34:41.370167  992635 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 12:34:41.371275  992635 kubeadm.go:310] 
	I0120 12:34:41.371411  992635 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 12:34:41.371436  992635 kubeadm.go:310] 
	I0120 12:34:41.371547  992635 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 12:34:41.371567  992635 kubeadm.go:310] 
	I0120 12:34:41.371607  992635 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 12:34:41.371696  992635 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 12:34:41.371776  992635 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 12:34:41.371785  992635 kubeadm.go:310] 
	I0120 12:34:41.371870  992635 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 12:34:41.371879  992635 kubeadm.go:310] 
	I0120 12:34:41.371946  992635 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 12:34:41.371956  992635 kubeadm.go:310] 
	I0120 12:34:41.372030  992635 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 12:34:41.372156  992635 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 12:34:41.372262  992635 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 12:34:41.372278  992635 kubeadm.go:310] 
	I0120 12:34:41.372392  992635 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 12:34:41.372498  992635 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 12:34:41.372507  992635 kubeadm.go:310] 
	I0120 12:34:41.372606  992635 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token n4uhes.3ig136bhcqw1unce \
	I0120 12:34:41.372783  992635 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9daca4b474befed26d429fab1ed19f40e58ea9925316ea59a2e801171f5c9665 \
	I0120 12:34:41.372829  992635 kubeadm.go:310] 	--control-plane 
	I0120 12:34:41.372852  992635 kubeadm.go:310] 
	I0120 12:34:41.372972  992635 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 12:34:41.372985  992635 kubeadm.go:310] 
	I0120 12:34:41.373076  992635 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token n4uhes.3ig136bhcqw1unce \
	I0120 12:34:41.373204  992635 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9daca4b474befed26d429fab1ed19f40e58ea9925316ea59a2e801171f5c9665 
	I0120 12:34:41.373662  992635 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 12:34:41.373689  992635 cni.go:84] Creating CNI manager for ""
	I0120 12:34:41.373703  992635 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:34:41.375374  992635 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 12:34:37.542589  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:37.559095  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:37.559165  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:37.598316  993585 cri.go:89] found id: ""
	I0120 12:34:37.598348  993585 logs.go:282] 0 containers: []
	W0120 12:34:37.598360  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:37.598369  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:37.598438  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:37.628599  993585 cri.go:89] found id: ""
	I0120 12:34:37.628633  993585 logs.go:282] 0 containers: []
	W0120 12:34:37.628645  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:37.628652  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:37.628727  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:37.668373  993585 cri.go:89] found id: ""
	I0120 12:34:37.668415  993585 logs.go:282] 0 containers: []
	W0120 12:34:37.668428  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:37.668436  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:37.668505  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:37.708471  993585 cri.go:89] found id: ""
	I0120 12:34:37.708506  993585 logs.go:282] 0 containers: []
	W0120 12:34:37.708517  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:37.708525  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:37.708586  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:37.741568  993585 cri.go:89] found id: ""
	I0120 12:34:37.741620  993585 logs.go:282] 0 containers: []
	W0120 12:34:37.741639  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:37.741647  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:37.741722  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:37.774368  993585 cri.go:89] found id: ""
	I0120 12:34:37.774396  993585 logs.go:282] 0 containers: []
	W0120 12:34:37.774406  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:37.774414  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:37.774482  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:37.806996  993585 cri.go:89] found id: ""
	I0120 12:34:37.807031  993585 logs.go:282] 0 containers: []
	W0120 12:34:37.807042  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:37.807050  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:37.807111  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:37.843251  993585 cri.go:89] found id: ""
	I0120 12:34:37.843285  993585 logs.go:282] 0 containers: []
	W0120 12:34:37.843296  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:37.843317  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:37.843336  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:37.918915  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:37.918937  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:37.918949  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:38.003693  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:38.003735  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:38.044200  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:38.044228  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:38.098358  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:38.098396  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:40.611766  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:40.625430  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:40.625513  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:40.662291  993585 cri.go:89] found id: ""
	I0120 12:34:40.662328  993585 logs.go:282] 0 containers: []
	W0120 12:34:40.662340  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:40.662348  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:40.662416  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:40.700505  993585 cri.go:89] found id: ""
	I0120 12:34:40.700535  993585 logs.go:282] 0 containers: []
	W0120 12:34:40.700543  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:40.700549  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:40.700621  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:40.740098  993585 cri.go:89] found id: ""
	I0120 12:34:40.740156  993585 logs.go:282] 0 containers: []
	W0120 12:34:40.740168  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:40.740177  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:40.740246  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:40.779511  993585 cri.go:89] found id: ""
	I0120 12:34:40.779538  993585 logs.go:282] 0 containers: []
	W0120 12:34:40.779547  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:40.779552  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:40.779602  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:40.814466  993585 cri.go:89] found id: ""
	I0120 12:34:40.814508  993585 logs.go:282] 0 containers: []
	W0120 12:34:40.814539  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:40.814549  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:40.814624  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:40.848198  993585 cri.go:89] found id: ""
	I0120 12:34:40.848224  993585 logs.go:282] 0 containers: []
	W0120 12:34:40.848233  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:40.848239  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:40.848295  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:40.881226  993585 cri.go:89] found id: ""
	I0120 12:34:40.881260  993585 logs.go:282] 0 containers: []
	W0120 12:34:40.881273  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:40.881281  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:40.881345  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:40.914605  993585 cri.go:89] found id: ""
	I0120 12:34:40.914639  993585 logs.go:282] 0 containers: []
	W0120 12:34:40.914649  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:40.914659  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:40.914671  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:40.967363  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:40.967401  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:40.981622  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:40.981655  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:41.052041  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:41.052074  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:41.052089  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:41.136661  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:41.136699  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:41.376667  992635 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 12:34:41.387591  992635 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
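
The two steps above create /etc/cni/net.d and push the bridge CNI config announced by the earlier "Configuring bridge CNI" message. The 496-byte file itself is not reproduced in the log; purely as an illustration of the shape of such a bridge conflist (generic values, not minikube's actual file):

	# illustrative only -- a generic bridge + host-local conflist, not the exact file minikube writes
	cat <<'EOF' > /tmp/bridge-cni-example.conflist
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16",
	        "routes": [ { "dst": "0.0.0.0/0" } ]
	      }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF
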
	I0120 12:34:41.405656  992635 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 12:34:41.405748  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:34:41.405779  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-987349 minikube.k8s.io/updated_at=2025_01_20T12_34_41_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=77d80cf1517f5f1439721b28711982314b21bec9 minikube.k8s.io/name=embed-certs-987349 minikube.k8s.io/primary=true
	I0120 12:34:41.445579  992635 ops.go:34] apiserver oom_adj: -16
	I0120 12:34:41.593723  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:34:42.093899  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:34:41.685860  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:43.685895  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:42.593991  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:34:43.093847  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:34:43.594692  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:34:44.094458  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:34:44.594425  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:34:45.094074  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:34:45.201304  992635 kubeadm.go:1113] duration metric: took 3.795623962s to wait for elevateKubeSystemPrivileges
	I0120 12:34:45.201350  992635 kubeadm.go:394] duration metric: took 5m3.346037476s to StartCluster
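
The burst of identical "get sa default" runs above is the wait behind the elevateKubeSystemPrivileges duration metric: after the minikube-rbac clusterrolebinding is created, the bootstrap step simply retries every half second until the default ServiceAccount exists in the new cluster. A compact sketch of that retry, assuming the same in-VM kubectl and kubeconfig paths the log shows:

	KUBECTL=/var/lib/minikube/binaries/v1.32.0/kubectl
	KUBECONFIG_PATH=/var/lib/minikube/kubeconfig
	# retry until bootstrapping has created the default ServiceAccount
	until sudo "$KUBECTL" get sa default --kubeconfig="$KUBECONFIG_PATH" >/dev/null 2>&1; do
	  sleep 0.5
	done
	echo "default ServiceAccount present"
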
	I0120 12:34:45.201376  992635 settings.go:142] acquiring lock: {Name:mk1751cb86b61fcfcb0ff093c93fb653ca2002ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:34:45.201474  992635 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:34:45.204831  992635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/kubeconfig: {Name:mk7b2d17f701fc845d991764ab9cc32b8b0646e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:34:45.205103  992635 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.170 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 12:34:45.205287  992635 config.go:182] Loaded profile config "embed-certs-987349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:34:45.205236  992635 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
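
The toEnable map above is the per-profile addon state reconciled at start: for this run dashboard, default-storageclass, metrics-server and storage-provisioner are true and everything else is off. The same state can be inspected or changed from the host with the addons subcommands, for example:

	# list addon status for the profile used in this log
	minikube addons list -p embed-certs-987349
	# the addons that are set to true in the map above
	minikube addons enable metrics-server -p embed-certs-987349
	minikube addons enable dashboard -p embed-certs-987349
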
	I0120 12:34:45.205342  992635 addons.go:69] Setting dashboard=true in profile "embed-certs-987349"
	I0120 12:34:45.205370  992635 addons.go:238] Setting addon dashboard=true in "embed-certs-987349"
	I0120 12:34:45.205355  992635 addons.go:69] Setting default-storageclass=true in profile "embed-certs-987349"
	I0120 12:34:45.205338  992635 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-987349"
	I0120 12:34:45.205375  992635 addons.go:69] Setting metrics-server=true in profile "embed-certs-987349"
	I0120 12:34:45.205395  992635 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-987349"
	W0120 12:34:45.205403  992635 addons.go:247] addon storage-provisioner should already be in state true
	I0120 12:34:45.205413  992635 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-987349"
	I0120 12:34:45.205443  992635 host.go:66] Checking if "embed-certs-987349" exists ...
	W0120 12:34:45.205383  992635 addons.go:247] addon dashboard should already be in state true
	I0120 12:34:45.205402  992635 addons.go:238] Setting addon metrics-server=true in "embed-certs-987349"
	I0120 12:34:45.205522  992635 host.go:66] Checking if "embed-certs-987349" exists ...
	W0120 12:34:45.205537  992635 addons.go:247] addon metrics-server should already be in state true
	I0120 12:34:45.205585  992635 host.go:66] Checking if "embed-certs-987349" exists ...
	I0120 12:34:45.205843  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:34:45.205869  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:34:45.205889  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:45.205900  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:45.205939  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:34:45.205984  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:45.205987  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:34:45.206010  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:45.206677  992635 out.go:177] * Verifying Kubernetes components...
	I0120 12:34:45.208137  992635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:34:45.222507  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40047
	I0120 12:34:45.222862  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42645
	I0120 12:34:45.223151  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:45.223444  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44495
	I0120 12:34:45.223795  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:34:45.223818  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:45.223841  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:45.224249  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:45.224372  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:34:45.224394  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:45.224716  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38727
	I0120 12:34:45.224739  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:45.224840  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:34:45.224881  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:45.225063  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:45.225306  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:34:45.225342  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:45.225362  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:45.225827  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:34:45.225827  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:34:45.225864  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:45.225848  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:45.226299  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:45.226361  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:45.226579  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetState
	I0120 12:34:45.226996  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:34:45.227044  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:45.230457  992635 addons.go:238] Setting addon default-storageclass=true in "embed-certs-987349"
	W0120 12:34:45.230485  992635 addons.go:247] addon default-storageclass should already be in state true
	I0120 12:34:45.230516  992635 host.go:66] Checking if "embed-certs-987349" exists ...
	I0120 12:34:45.230928  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:34:45.230994  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:45.245536  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39527
	I0120 12:34:45.246137  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:45.246774  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:34:45.246800  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:45.246874  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46007
	I0120 12:34:45.247488  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:45.247514  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:45.247491  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34245
	I0120 12:34:45.247884  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:45.247991  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetState
	I0120 12:34:45.248377  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:34:45.248398  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:45.248650  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:34:45.248676  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:45.249046  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:45.249050  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:45.249260  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetState
	I0120 12:34:45.249453  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetState
	I0120 12:34:45.250058  992635 main.go:141] libmachine: (embed-certs-987349) Calling .DriverName
	I0120 12:34:45.250219  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45249
	I0120 12:34:45.250876  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:45.251417  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:34:45.251442  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:45.251975  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:45.252485  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:34:45.252527  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:45.252582  992635 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0120 12:34:45.252806  992635 main.go:141] libmachine: (embed-certs-987349) Calling .DriverName
	I0120 12:34:45.253386  992635 main.go:141] libmachine: (embed-certs-987349) Calling .DriverName
	I0120 12:34:45.253969  992635 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 12:34:45.253998  992635 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 12:34:45.254019  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHHostname
	I0120 12:34:45.254034  992635 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:34:45.254933  992635 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0120 12:34:45.255880  992635 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:34:45.255900  992635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 12:34:45.255918  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHHostname
	I0120 12:34:45.258271  992635 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0120 12:34:43.674682  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:43.690652  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:43.690723  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:43.721291  993585 cri.go:89] found id: ""
	I0120 12:34:43.721323  993585 logs.go:282] 0 containers: []
	W0120 12:34:43.721334  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:43.721342  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:43.721410  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:43.752041  993585 cri.go:89] found id: ""
	I0120 12:34:43.752065  993585 logs.go:282] 0 containers: []
	W0120 12:34:43.752072  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:43.752078  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:43.752138  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:43.785868  993585 cri.go:89] found id: ""
	I0120 12:34:43.785901  993585 logs.go:282] 0 containers: []
	W0120 12:34:43.785913  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:43.785920  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:43.785989  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:43.815950  993585 cri.go:89] found id: ""
	I0120 12:34:43.815981  993585 logs.go:282] 0 containers: []
	W0120 12:34:43.815991  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:43.815998  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:43.816051  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:43.846957  993585 cri.go:89] found id: ""
	I0120 12:34:43.846989  993585 logs.go:282] 0 containers: []
	W0120 12:34:43.846998  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:43.847006  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:43.847063  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:43.879933  993585 cri.go:89] found id: ""
	I0120 12:34:43.879961  993585 logs.go:282] 0 containers: []
	W0120 12:34:43.879971  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:43.879979  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:43.880037  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:43.910895  993585 cri.go:89] found id: ""
	I0120 12:34:43.910922  993585 logs.go:282] 0 containers: []
	W0120 12:34:43.910932  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:43.910940  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:43.911004  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:43.940052  993585 cri.go:89] found id: ""
	I0120 12:34:43.940083  993585 logs.go:282] 0 containers: []
	W0120 12:34:43.940092  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:43.940103  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:43.940119  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:43.992764  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:43.992797  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:44.004467  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:44.004489  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:44.076395  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:44.076424  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:44.076440  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:44.155006  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:44.155051  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:46.706685  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:46.720910  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:46.720986  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:46.769398  993585 cri.go:89] found id: ""
	I0120 12:34:46.769438  993585 logs.go:282] 0 containers: []
	W0120 12:34:46.769452  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:46.769461  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:46.769532  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:46.812658  993585 cri.go:89] found id: ""
	I0120 12:34:46.812692  993585 logs.go:282] 0 containers: []
	W0120 12:34:46.812704  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:46.812712  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:46.812780  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:46.849224  993585 cri.go:89] found id: ""
	I0120 12:34:46.849260  993585 logs.go:282] 0 containers: []
	W0120 12:34:46.849271  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:46.849278  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:46.849340  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:46.880621  993585 cri.go:89] found id: ""
	I0120 12:34:46.880660  993585 logs.go:282] 0 containers: []
	W0120 12:34:46.880672  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:46.880680  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:46.880754  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:46.917825  993585 cri.go:89] found id: ""
	I0120 12:34:46.917860  993585 logs.go:282] 0 containers: []
	W0120 12:34:46.917872  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:46.917880  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:46.917948  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:46.953069  993585 cri.go:89] found id: ""
	I0120 12:34:46.953102  993585 logs.go:282] 0 containers: []
	W0120 12:34:46.953114  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:46.953122  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:46.953210  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:45.258378  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:34:45.258973  992635 main.go:141] libmachine: (embed-certs-987349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:72:25", ip: ""} in network mk-embed-certs-987349: {Iface:virbr4 ExpiryTime:2025-01-20 13:29:28 +0000 UTC Type:0 Mac:52:54:00:17:72:25 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:embed-certs-987349 Clientid:01:52:54:00:17:72:25}
	I0120 12:34:45.259074  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined IP address 192.168.72.170 and MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:34:45.259447  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHPort
	I0120 12:34:45.259546  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0120 12:34:45.259555  992635 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0120 12:34:45.259566  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHHostname
	I0120 12:34:45.259650  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHKeyPath
	I0120 12:34:45.260023  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHUsername
	I0120 12:34:45.260165  992635 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/embed-certs-987349/id_rsa Username:docker}
	I0120 12:34:45.260401  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:34:45.260819  992635 main.go:141] libmachine: (embed-certs-987349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:72:25", ip: ""} in network mk-embed-certs-987349: {Iface:virbr4 ExpiryTime:2025-01-20 13:29:28 +0000 UTC Type:0 Mac:52:54:00:17:72:25 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:embed-certs-987349 Clientid:01:52:54:00:17:72:25}
	I0120 12:34:45.260837  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined IP address 192.168.72.170 and MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:34:45.261018  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHPort
	I0120 12:34:45.261123  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHKeyPath
	I0120 12:34:45.261371  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHUsername
	I0120 12:34:45.261498  992635 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/embed-certs-987349/id_rsa Username:docker}
	I0120 12:34:45.263039  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:34:45.263451  992635 main.go:141] libmachine: (embed-certs-987349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:72:25", ip: ""} in network mk-embed-certs-987349: {Iface:virbr4 ExpiryTime:2025-01-20 13:29:28 +0000 UTC Type:0 Mac:52:54:00:17:72:25 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:embed-certs-987349 Clientid:01:52:54:00:17:72:25}
	I0120 12:34:45.263466  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined IP address 192.168.72.170 and MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:34:45.263718  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHPort
	I0120 12:34:45.263876  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHKeyPath
	I0120 12:34:45.264027  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHUsername
	I0120 12:34:45.264247  992635 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/embed-certs-987349/id_rsa Username:docker}
	I0120 12:34:45.271639  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41109
	I0120 12:34:45.272049  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:45.272492  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:34:45.272506  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:45.272861  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:45.273045  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetState
	I0120 12:34:45.275220  992635 main.go:141] libmachine: (embed-certs-987349) Calling .DriverName
	I0120 12:34:45.275411  992635 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 12:34:45.275425  992635 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 12:34:45.275441  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHHostname
	I0120 12:34:45.278031  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:34:45.278264  992635 main.go:141] libmachine: (embed-certs-987349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:72:25", ip: ""} in network mk-embed-certs-987349: {Iface:virbr4 ExpiryTime:2025-01-20 13:29:28 +0000 UTC Type:0 Mac:52:54:00:17:72:25 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:embed-certs-987349 Clientid:01:52:54:00:17:72:25}
	I0120 12:34:45.278284  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined IP address 192.168.72.170 and MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:34:45.278459  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHPort
	I0120 12:34:45.278651  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHKeyPath
	I0120 12:34:45.278797  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHUsername
	I0120 12:34:45.278940  992635 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/embed-certs-987349/id_rsa Username:docker}
	I0120 12:34:45.485223  992635 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:34:45.512129  992635 node_ready.go:35] waiting up to 6m0s for node "embed-certs-987349" to be "Ready" ...
	I0120 12:34:45.535766  992635 node_ready.go:49] node "embed-certs-987349" has status "Ready":"True"
	I0120 12:34:45.535800  992635 node_ready.go:38] duration metric: took 23.637811ms for node "embed-certs-987349" to be "Ready" ...
	I0120 12:34:45.535816  992635 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:34:45.546936  992635 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-cf5ts" in "kube-system" namespace to be "Ready" ...
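
The readiness phase above first waits on the node object and then on each system-critical pod matching the listed labels, starting with coredns. Outside the test harness the equivalent checks can be written with kubectl wait; the selector and node name below are taken from the log lines above, and the context name assumes the kubeconfig the test just wrote:

	# node readiness, the first wait in the log
	kubectl --context embed-certs-987349 wait --for=condition=Ready \
	  node/embed-certs-987349 --timeout=6m
	# one of the system-critical pod groups listed above (k8s-app=kube-dns selects coredns)
	kubectl --context embed-certs-987349 -n kube-system wait --for=condition=Ready \
	  pod -l k8s-app=kube-dns --timeout=6m
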
	I0120 12:34:45.591884  992635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 12:34:45.672669  992635 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 12:34:45.672696  992635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0120 12:34:45.706505  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0120 12:34:45.706552  992635 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0120 12:34:45.719651  992635 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 12:34:45.719685  992635 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 12:34:45.797607  992635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:34:45.912193  992635 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 12:34:45.912228  992635 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 12:34:45.919037  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0120 12:34:45.919066  992635 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0120 12:34:45.995504  992635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 12:34:45.999745  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0120 12:34:45.999769  992635 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0120 12:34:46.012312  992635 main.go:141] libmachine: Making call to close driver server
	I0120 12:34:46.012340  992635 main.go:141] libmachine: (embed-certs-987349) Calling .Close
	I0120 12:34:46.012774  992635 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:34:46.012805  992635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:34:46.012815  992635 main.go:141] libmachine: Making call to close driver server
	I0120 12:34:46.012824  992635 main.go:141] libmachine: (embed-certs-987349) Calling .Close
	I0120 12:34:46.013169  992635 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:34:46.013179  992635 main.go:141] libmachine: (embed-certs-987349) DBG | Closing plugin on server side
	I0120 12:34:46.013190  992635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:34:46.039766  992635 main.go:141] libmachine: Making call to close driver server
	I0120 12:34:46.039787  992635 main.go:141] libmachine: (embed-certs-987349) Calling .Close
	I0120 12:34:46.040079  992635 main.go:141] libmachine: (embed-certs-987349) DBG | Closing plugin on server side
	I0120 12:34:46.040141  992635 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:34:46.040161  992635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:34:46.060472  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0120 12:34:46.060499  992635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0120 12:34:46.125182  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0120 12:34:46.125209  992635 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0120 12:34:46.163864  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0120 12:34:46.163897  992635 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0120 12:34:46.271512  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0120 12:34:46.271542  992635 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0120 12:34:46.315589  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0120 12:34:46.315615  992635 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0120 12:34:46.382800  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 12:34:46.382834  992635 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0120 12:34:46.471318  992635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 12:34:47.146418  992635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.348766384s)
	I0120 12:34:47.146477  992635 main.go:141] libmachine: Making call to close driver server
	I0120 12:34:47.146493  992635 main.go:141] libmachine: (embed-certs-987349) Calling .Close
	I0120 12:34:47.146889  992635 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:34:47.146910  992635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:34:47.146920  992635 main.go:141] libmachine: Making call to close driver server
	I0120 12:34:47.146928  992635 main.go:141] libmachine: (embed-certs-987349) Calling .Close
	I0120 12:34:47.148865  992635 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:34:47.148875  992635 main.go:141] libmachine: (embed-certs-987349) DBG | Closing plugin on server side
	I0120 12:34:47.148885  992635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:34:47.375249  992635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.379691916s)
	I0120 12:34:47.375330  992635 main.go:141] libmachine: Making call to close driver server
	I0120 12:34:47.375349  992635 main.go:141] libmachine: (embed-certs-987349) Calling .Close
	I0120 12:34:47.375787  992635 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:34:47.375817  992635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:34:47.375827  992635 main.go:141] libmachine: Making call to close driver server
	I0120 12:34:47.375835  992635 main.go:141] libmachine: (embed-certs-987349) Calling .Close
	I0120 12:34:47.375855  992635 main.go:141] libmachine: (embed-certs-987349) DBG | Closing plugin on server side
	I0120 12:34:47.376085  992635 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:34:47.376105  992635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:34:47.376121  992635 addons.go:479] Verifying addon metrics-server=true in "embed-certs-987349"
	I0120 12:34:47.554735  992635 pod_ready.go:103] pod "coredns-668d6bf9bc-cf5ts" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:48.098046  992635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.626653683s)
	I0120 12:34:48.098124  992635 main.go:141] libmachine: Making call to close driver server
	I0120 12:34:48.098144  992635 main.go:141] libmachine: (embed-certs-987349) Calling .Close
	I0120 12:34:48.098568  992635 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:34:48.098628  992635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:34:48.098648  992635 main.go:141] libmachine: Making call to close driver server
	I0120 12:34:48.098651  992635 main.go:141] libmachine: (embed-certs-987349) DBG | Closing plugin on server side
	I0120 12:34:48.098663  992635 main.go:141] libmachine: (embed-certs-987349) Calling .Close
	I0120 12:34:48.098945  992635 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:34:48.098958  992635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:34:48.100362  992635 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-987349 addons enable metrics-server
	
	I0120 12:34:48.101744  992635 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0120 12:34:46.185138  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:48.185173  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:46.991590  993585 cri.go:89] found id: ""
	I0120 12:34:46.991624  993585 logs.go:282] 0 containers: []
	W0120 12:34:46.991636  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:46.991643  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:46.991709  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:47.026992  993585 cri.go:89] found id: ""
	I0120 12:34:47.027028  993585 logs.go:282] 0 containers: []
	W0120 12:34:47.027039  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:47.027052  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:47.027070  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:47.041560  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:47.041600  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:47.116950  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:47.116982  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:47.116999  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:47.220147  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:47.220186  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:47.261692  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:47.261735  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:49.823725  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:49.837812  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:49.837891  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:49.870910  993585 cri.go:89] found id: ""
	I0120 12:34:49.870942  993585 logs.go:282] 0 containers: []
	W0120 12:34:49.870954  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:49.870974  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:49.871038  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:49.901938  993585 cri.go:89] found id: ""
	I0120 12:34:49.901971  993585 logs.go:282] 0 containers: []
	W0120 12:34:49.901983  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:49.901991  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:49.902050  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:49.934859  993585 cri.go:89] found id: ""
	I0120 12:34:49.934895  993585 logs.go:282] 0 containers: []
	W0120 12:34:49.934908  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:49.934916  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:49.934978  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:49.969109  993585 cri.go:89] found id: ""
	I0120 12:34:49.969144  993585 logs.go:282] 0 containers: []
	W0120 12:34:49.969152  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:49.969159  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:49.969215  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:50.000593  993585 cri.go:89] found id: ""
	I0120 12:34:50.000624  993585 logs.go:282] 0 containers: []
	W0120 12:34:50.000634  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:50.000644  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:50.000704  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:50.031935  993585 cri.go:89] found id: ""
	I0120 12:34:50.031956  993585 logs.go:282] 0 containers: []
	W0120 12:34:50.031963  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:50.031968  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:50.032013  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:50.066876  993585 cri.go:89] found id: ""
	I0120 12:34:50.066904  993585 logs.go:282] 0 containers: []
	W0120 12:34:50.066914  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:50.066922  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:50.066980  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:50.099413  993585 cri.go:89] found id: ""
	I0120 12:34:50.099440  993585 logs.go:282] 0 containers: []
	W0120 12:34:50.099448  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:50.099458  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:50.099469  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:50.147538  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:50.147565  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:50.159202  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:50.159227  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:50.233169  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:50.233201  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:50.233218  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:50.313297  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:50.313331  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:48.102973  992635 addons.go:514] duration metric: took 2.897750546s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0120 12:34:50.054643  992635 pod_ready.go:103] pod "coredns-668d6bf9bc-cf5ts" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:50.685136  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:53.185766  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:52.849232  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:52.863600  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:52.863668  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:52.897114  993585 cri.go:89] found id: ""
	I0120 12:34:52.897146  993585 logs.go:282] 0 containers: []
	W0120 12:34:52.897158  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:52.897166  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:52.897235  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:52.931572  993585 cri.go:89] found id: ""
	I0120 12:34:52.931608  993585 logs.go:282] 0 containers: []
	W0120 12:34:52.931621  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:52.931631  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:52.931699  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:52.967427  993585 cri.go:89] found id: ""
	I0120 12:34:52.967464  993585 logs.go:282] 0 containers: []
	W0120 12:34:52.967477  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:52.967485  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:52.967550  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:53.004996  993585 cri.go:89] found id: ""
	I0120 12:34:53.005036  993585 logs.go:282] 0 containers: []
	W0120 12:34:53.005045  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:53.005052  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:53.005130  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:53.042883  993585 cri.go:89] found id: ""
	I0120 12:34:53.042920  993585 logs.go:282] 0 containers: []
	W0120 12:34:53.042932  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:53.042941  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:53.043012  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:53.081504  993585 cri.go:89] found id: ""
	I0120 12:34:53.081548  993585 logs.go:282] 0 containers: []
	W0120 12:34:53.081560  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:53.081569  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:53.081638  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:53.116486  993585 cri.go:89] found id: ""
	I0120 12:34:53.116526  993585 logs.go:282] 0 containers: []
	W0120 12:34:53.116537  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:53.116546  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:53.116621  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:53.150011  993585 cri.go:89] found id: ""
	I0120 12:34:53.150044  993585 logs.go:282] 0 containers: []
	W0120 12:34:53.150055  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:53.150068  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:53.150082  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:53.236271  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:53.236314  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:53.272793  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:53.272823  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:53.328164  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:53.328210  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:53.342124  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:53.342159  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:53.436951  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:55.938662  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:55.954006  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:55.954080  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:55.995805  993585 cri.go:89] found id: ""
	I0120 12:34:55.995836  993585 logs.go:282] 0 containers: []
	W0120 12:34:55.995847  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:55.995855  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:55.995922  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:56.037391  993585 cri.go:89] found id: ""
	I0120 12:34:56.037422  993585 logs.go:282] 0 containers: []
	W0120 12:34:56.037431  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:56.037440  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:56.037500  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:56.073395  993585 cri.go:89] found id: ""
	I0120 12:34:56.073432  993585 logs.go:282] 0 containers: []
	W0120 12:34:56.073444  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:56.073452  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:56.073521  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:56.113060  993585 cri.go:89] found id: ""
	I0120 12:34:56.113095  993585 logs.go:282] 0 containers: []
	W0120 12:34:56.113106  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:56.113114  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:56.113192  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:56.149448  993585 cri.go:89] found id: ""
	I0120 12:34:56.149481  993585 logs.go:282] 0 containers: []
	W0120 12:34:56.149492  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:56.149501  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:56.149565  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:56.188193  993585 cri.go:89] found id: ""
	I0120 12:34:56.188222  993585 logs.go:282] 0 containers: []
	W0120 12:34:56.188232  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:56.188241  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:56.188305  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:56.229490  993585 cri.go:89] found id: ""
	I0120 12:34:56.229520  993585 logs.go:282] 0 containers: []
	W0120 12:34:56.229530  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:56.229538  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:56.229596  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:56.268312  993585 cri.go:89] found id: ""
	I0120 12:34:56.268342  993585 logs.go:282] 0 containers: []
	W0120 12:34:56.268355  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:56.268368  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:56.268382  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:56.362946  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:56.362970  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:56.362987  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:56.449009  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:56.449049  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:56.497349  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:56.497393  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:56.552829  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:56.552864  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:52.555092  992635 pod_ready.go:93] pod "coredns-668d6bf9bc-cf5ts" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:52.555118  992635 pod_ready.go:82] duration metric: took 7.008153036s for pod "coredns-668d6bf9bc-cf5ts" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.555129  992635 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-gr6pw" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.559701  992635 pod_ready.go:93] pod "coredns-668d6bf9bc-gr6pw" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:52.559730  992635 pod_ready.go:82] duration metric: took 4.593756ms for pod "coredns-668d6bf9bc-gr6pw" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.559743  992635 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.564650  992635 pod_ready.go:93] pod "etcd-embed-certs-987349" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:52.564677  992635 pod_ready.go:82] duration metric: took 4.924851ms for pod "etcd-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.564690  992635 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.568924  992635 pod_ready.go:93] pod "kube-apiserver-embed-certs-987349" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:52.568947  992635 pod_ready.go:82] duration metric: took 4.248574ms for pod "kube-apiserver-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.568959  992635 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.573555  992635 pod_ready.go:93] pod "kube-controller-manager-embed-certs-987349" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:52.573574  992635 pod_ready.go:82] duration metric: took 4.607213ms for pod "kube-controller-manager-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.573582  992635 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xrg5x" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.951750  992635 pod_ready.go:93] pod "kube-proxy-xrg5x" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:52.951777  992635 pod_ready.go:82] duration metric: took 378.189084ms for pod "kube-proxy-xrg5x" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.951787  992635 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:53.352358  992635 pod_ready.go:93] pod "kube-scheduler-embed-certs-987349" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:53.352397  992635 pod_ready.go:82] duration metric: took 400.600706ms for pod "kube-scheduler-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:53.352410  992635 pod_ready.go:39] duration metric: took 7.816579945s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:34:53.352431  992635 api_server.go:52] waiting for apiserver process to appear ...
	I0120 12:34:53.352497  992635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:53.385445  992635 api_server.go:72] duration metric: took 8.18029522s to wait for apiserver process to appear ...
	I0120 12:34:53.385483  992635 api_server.go:88] waiting for apiserver healthz status ...
	I0120 12:34:53.385512  992635 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8443/healthz ...
	I0120 12:34:53.390273  992635 api_server.go:279] https://192.168.72.170:8443/healthz returned 200:
	ok
	I0120 12:34:53.391546  992635 api_server.go:141] control plane version: v1.32.0
	I0120 12:34:53.391569  992635 api_server.go:131] duration metric: took 6.078483ms to wait for apiserver health ...
	I0120 12:34:53.391576  992635 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 12:34:53.555192  992635 system_pods.go:59] 9 kube-system pods found
	I0120 12:34:53.555222  992635 system_pods.go:61] "coredns-668d6bf9bc-cf5ts" [91648c6f-7cef-427f-82f3-7572a9b5d80e] Running
	I0120 12:34:53.555227  992635 system_pods.go:61] "coredns-668d6bf9bc-gr6pw" [6ff16a87-0a5e-4d82-b13d-2c72afef6dc0] Running
	I0120 12:34:53.555231  992635 system_pods.go:61] "etcd-embed-certs-987349" [5a54b1fe-f8d1-43c6-a430-a37fa3fa04b7] Running
	I0120 12:34:53.555235  992635 system_pods.go:61] "kube-apiserver-embed-certs-987349" [3e1da80d-0a1d-44bb-945d-534b91eebb95] Running
	I0120 12:34:53.555241  992635 system_pods.go:61] "kube-controller-manager-embed-certs-987349" [e1f4800a-ff08-4ea5-8134-81130f2d8f3d] Running
	I0120 12:34:53.555245  992635 system_pods.go:61] "kube-proxy-xrg5x" [a76bebb9-1eed-46fb-9f3a-d3dc1a5930c7] Running
	I0120 12:34:53.555248  992635 system_pods.go:61] "kube-scheduler-embed-certs-987349" [d35e4dae-055f-4db7-b807-5767fa324498] Running
	I0120 12:34:53.555257  992635 system_pods.go:61] "metrics-server-f79f97bbb-4vcgc" [2108ac96-d8cd-429f-ac2d-babc6d97267b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 12:34:53.555262  992635 system_pods.go:61] "storage-provisioner" [953b33a8-d2a0-447d-a01b-49350c6555f7] Running
	I0120 12:34:53.555270  992635 system_pods.go:74] duration metric: took 163.687709ms to wait for pod list to return data ...
	I0120 12:34:53.555281  992635 default_sa.go:34] waiting for default service account to be created ...
	I0120 12:34:53.753014  992635 default_sa.go:45] found service account: "default"
	I0120 12:34:53.753053  992635 default_sa.go:55] duration metric: took 197.764358ms for default service account to be created ...
	I0120 12:34:53.753066  992635 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 12:34:53.953127  992635 system_pods.go:87] 9 kube-system pods found
	I0120 12:34:55.685957  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:57.679747  993131 pod_ready.go:82] duration metric: took 4m0.000931966s for pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace to be "Ready" ...
	E0120 12:34:57.679804  993131 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0120 12:34:57.679835  993131 pod_ready.go:39] duration metric: took 4m14.541139208s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:34:57.679882  993131 kubeadm.go:597] duration metric: took 4m22.782450691s to restartPrimaryControlPlane
	W0120 12:34:57.679976  993131 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0120 12:34:57.680017  993131 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 12:34:59.068750  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:59.085643  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:59.085720  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:59.128466  993585 cri.go:89] found id: ""
	I0120 12:34:59.128566  993585 logs.go:282] 0 containers: []
	W0120 12:34:59.128584  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:59.128594  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:59.128677  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:59.175838  993585 cri.go:89] found id: ""
	I0120 12:34:59.175873  993585 logs.go:282] 0 containers: []
	W0120 12:34:59.175885  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:59.175893  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:59.175961  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:59.211334  993585 cri.go:89] found id: ""
	I0120 12:34:59.211371  993585 logs.go:282] 0 containers: []
	W0120 12:34:59.211383  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:59.211392  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:59.211466  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:59.248992  993585 cri.go:89] found id: ""
	I0120 12:34:59.249031  993585 logs.go:282] 0 containers: []
	W0120 12:34:59.249043  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:59.249060  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:59.249127  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:59.285229  993585 cri.go:89] found id: ""
	I0120 12:34:59.285266  993585 logs.go:282] 0 containers: []
	W0120 12:34:59.285279  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:59.285288  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:59.285367  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:59.323049  993585 cri.go:89] found id: ""
	I0120 12:34:59.323081  993585 logs.go:282] 0 containers: []
	W0120 12:34:59.323092  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:59.323099  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:59.323180  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:59.365925  993585 cri.go:89] found id: ""
	I0120 12:34:59.365968  993585 logs.go:282] 0 containers: []
	W0120 12:34:59.365978  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:59.365985  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:59.366060  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:59.406489  993585 cri.go:89] found id: ""
	I0120 12:34:59.406540  993585 logs.go:282] 0 containers: []
	W0120 12:34:59.406553  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:59.406565  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:59.406579  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:59.477858  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:59.477896  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:59.494617  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:59.494658  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:59.572132  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:59.572160  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:59.572178  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:59.668424  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:59.668471  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:02.212721  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:02.227926  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:02.228019  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:02.266386  993585 cri.go:89] found id: ""
	I0120 12:35:02.266431  993585 logs.go:282] 0 containers: []
	W0120 12:35:02.266444  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:02.266454  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:02.266541  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:02.301567  993585 cri.go:89] found id: ""
	I0120 12:35:02.301595  993585 logs.go:282] 0 containers: []
	W0120 12:35:02.301607  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:02.301615  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:02.301678  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:02.338717  993585 cri.go:89] found id: ""
	I0120 12:35:02.338758  993585 logs.go:282] 0 containers: []
	W0120 12:35:02.338770  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:02.338778  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:02.338847  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:02.373953  993585 cri.go:89] found id: ""
	I0120 12:35:02.373990  993585 logs.go:282] 0 containers: []
	W0120 12:35:02.374004  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:02.374014  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:02.374113  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:02.406791  993585 cri.go:89] found id: ""
	I0120 12:35:02.406828  993585 logs.go:282] 0 containers: []
	W0120 12:35:02.406839  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:02.406845  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:02.406897  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:02.443578  993585 cri.go:89] found id: ""
	I0120 12:35:02.443609  993585 logs.go:282] 0 containers: []
	W0120 12:35:02.443617  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:02.443626  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:02.443676  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:02.477334  993585 cri.go:89] found id: ""
	I0120 12:35:02.477374  993585 logs.go:282] 0 containers: []
	W0120 12:35:02.477387  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:02.477395  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:02.477468  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:02.511320  993585 cri.go:89] found id: ""
	I0120 12:35:02.511347  993585 logs.go:282] 0 containers: []
	W0120 12:35:02.511357  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:02.511368  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:02.511379  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:02.563616  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:02.563655  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:02.589388  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:02.589428  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:02.668649  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:02.668676  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:02.668690  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:02.754754  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:02.754788  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:05.298701  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:05.312912  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:05.312991  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:05.345040  993585 cri.go:89] found id: ""
	I0120 12:35:05.345073  993585 logs.go:282] 0 containers: []
	W0120 12:35:05.345082  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:05.345095  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:05.345166  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:05.378693  993585 cri.go:89] found id: ""
	I0120 12:35:05.378728  993585 logs.go:282] 0 containers: []
	W0120 12:35:05.378739  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:05.378747  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:05.378802  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:05.411600  993585 cri.go:89] found id: ""
	I0120 12:35:05.411628  993585 logs.go:282] 0 containers: []
	W0120 12:35:05.411636  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:05.411642  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:05.411693  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:05.444416  993585 cri.go:89] found id: ""
	I0120 12:35:05.444445  993585 logs.go:282] 0 containers: []
	W0120 12:35:05.444453  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:05.444461  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:05.444525  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:05.475125  993585 cri.go:89] found id: ""
	I0120 12:35:05.475158  993585 logs.go:282] 0 containers: []
	W0120 12:35:05.475171  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:05.475177  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:05.475246  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:05.508163  993585 cri.go:89] found id: ""
	I0120 12:35:05.508194  993585 logs.go:282] 0 containers: []
	W0120 12:35:05.508207  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:05.508215  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:05.508278  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:05.543703  993585 cri.go:89] found id: ""
	I0120 12:35:05.543737  993585 logs.go:282] 0 containers: []
	W0120 12:35:05.543745  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:05.543751  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:05.543819  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:05.579560  993585 cri.go:89] found id: ""
	I0120 12:35:05.579594  993585 logs.go:282] 0 containers: []
	W0120 12:35:05.579606  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:05.579620  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:05.579634  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:05.632935  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:05.632986  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:05.645983  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:05.646012  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:05.719551  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:05.719582  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:05.719599  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:05.799242  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:05.799283  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:08.344816  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:08.358927  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:08.359006  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:08.393237  993585 cri.go:89] found id: ""
	I0120 12:35:08.393265  993585 logs.go:282] 0 containers: []
	W0120 12:35:08.393274  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:08.393280  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:08.393333  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:08.432032  993585 cri.go:89] found id: ""
	I0120 12:35:08.432061  993585 logs.go:282] 0 containers: []
	W0120 12:35:08.432069  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:08.432077  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:08.432155  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:08.465329  993585 cri.go:89] found id: ""
	I0120 12:35:08.465357  993585 logs.go:282] 0 containers: []
	W0120 12:35:08.465366  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:08.465375  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:08.465450  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:08.498889  993585 cri.go:89] found id: ""
	I0120 12:35:08.498932  993585 logs.go:282] 0 containers: []
	W0120 12:35:08.498944  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:08.498952  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:08.499034  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:08.533799  993585 cri.go:89] found id: ""
	I0120 12:35:08.533827  993585 logs.go:282] 0 containers: []
	W0120 12:35:08.533836  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:08.533842  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:08.533898  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:08.569072  993585 cri.go:89] found id: ""
	I0120 12:35:08.569109  993585 logs.go:282] 0 containers: []
	W0120 12:35:08.569121  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:08.569129  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:08.569190  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:08.602775  993585 cri.go:89] found id: ""
	I0120 12:35:08.602815  993585 logs.go:282] 0 containers: []
	W0120 12:35:08.602828  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:08.602836  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:08.602899  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:08.637207  993585 cri.go:89] found id: ""
	I0120 12:35:08.637242  993585 logs.go:282] 0 containers: []
	W0120 12:35:08.637253  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:08.637266  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:08.637281  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:08.650046  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:08.650077  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:08.717640  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:08.717668  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:08.717682  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:08.795565  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:08.795605  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:08.832910  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:08.832951  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:11.391198  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:11.404454  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:11.404548  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:11.438901  993585 cri.go:89] found id: ""
	I0120 12:35:11.438942  993585 logs.go:282] 0 containers: []
	W0120 12:35:11.438951  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:11.438959  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:11.439028  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:11.475199  993585 cri.go:89] found id: ""
	I0120 12:35:11.475228  993585 logs.go:282] 0 containers: []
	W0120 12:35:11.475237  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:11.475243  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:11.475304  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:11.507984  993585 cri.go:89] found id: ""
	I0120 12:35:11.508029  993585 logs.go:282] 0 containers: []
	W0120 12:35:11.508041  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:11.508052  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:11.508145  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:11.544131  993585 cri.go:89] found id: ""
	I0120 12:35:11.544162  993585 logs.go:282] 0 containers: []
	W0120 12:35:11.544170  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:11.544176  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:11.544229  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:11.585316  993585 cri.go:89] found id: ""
	I0120 12:35:11.585353  993585 logs.go:282] 0 containers: []
	W0120 12:35:11.585364  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:11.585370  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:11.585424  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:11.621531  993585 cri.go:89] found id: ""
	I0120 12:35:11.621565  993585 logs.go:282] 0 containers: []
	W0120 12:35:11.621578  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:11.621587  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:11.621644  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:11.653882  993585 cri.go:89] found id: ""
	I0120 12:35:11.653915  993585 logs.go:282] 0 containers: []
	W0120 12:35:11.653926  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:11.653935  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:11.654005  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:11.686715  993585 cri.go:89] found id: ""
	I0120 12:35:11.686751  993585 logs.go:282] 0 containers: []
	W0120 12:35:11.686763  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:11.686777  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:11.686792  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:11.766495  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:11.766550  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:11.805907  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:11.805944  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:11.854399  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:11.854435  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:11.867131  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:11.867168  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:11.930826  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:14.431154  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:14.444170  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:14.444252  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:14.478030  993585 cri.go:89] found id: ""
	I0120 12:35:14.478067  993585 logs.go:282] 0 containers: []
	W0120 12:35:14.478077  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:14.478083  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:14.478148  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:14.510821  993585 cri.go:89] found id: ""
	I0120 12:35:14.510855  993585 logs.go:282] 0 containers: []
	W0120 12:35:14.510867  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:14.510874  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:14.510942  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:14.543080  993585 cri.go:89] found id: ""
	I0120 12:35:14.543136  993585 logs.go:282] 0 containers: []
	W0120 12:35:14.543149  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:14.543157  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:14.543214  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:14.579258  993585 cri.go:89] found id: ""
	I0120 12:35:14.579293  993585 logs.go:282] 0 containers: []
	W0120 12:35:14.579302  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:14.579308  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:14.579361  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:14.617149  993585 cri.go:89] found id: ""
	I0120 12:35:14.617187  993585 logs.go:282] 0 containers: []
	W0120 12:35:14.617198  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:14.617206  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:14.617278  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:14.650716  993585 cri.go:89] found id: ""
	I0120 12:35:14.650754  993585 logs.go:282] 0 containers: []
	W0120 12:35:14.650793  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:14.650803  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:14.650874  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:14.685987  993585 cri.go:89] found id: ""
	I0120 12:35:14.686018  993585 logs.go:282] 0 containers: []
	W0120 12:35:14.686026  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:14.686032  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:14.686084  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:14.736332  993585 cri.go:89] found id: ""
	I0120 12:35:14.736370  993585 logs.go:282] 0 containers: []
	W0120 12:35:14.736378  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:14.736389  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:14.736406  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:14.789693  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:14.789734  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:14.818344  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:14.818376  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:14.891944  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:14.891974  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:14.891990  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:14.969846  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:14.969888  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:17.512148  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:17.525055  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:17.525143  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:17.559502  993585 cri.go:89] found id: ""
	I0120 12:35:17.559539  993585 logs.go:282] 0 containers: []
	W0120 12:35:17.559550  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:17.559563  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:17.559624  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:17.596133  993585 cri.go:89] found id: ""
	I0120 12:35:17.596170  993585 logs.go:282] 0 containers: []
	W0120 12:35:17.596182  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:17.596190  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:17.596258  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:17.632458  993585 cri.go:89] found id: ""
	I0120 12:35:17.632511  993585 logs.go:282] 0 containers: []
	W0120 12:35:17.632526  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:17.632535  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:17.632614  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:17.666860  993585 cri.go:89] found id: ""
	I0120 12:35:17.666891  993585 logs.go:282] 0 containers: []
	W0120 12:35:17.666899  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:17.666905  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:17.666959  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:17.701282  993585 cri.go:89] found id: ""
	I0120 12:35:17.701309  993585 logs.go:282] 0 containers: []
	W0120 12:35:17.701318  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:17.701325  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:17.701384  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:17.733358  993585 cri.go:89] found id: ""
	I0120 12:35:17.733391  993585 logs.go:282] 0 containers: []
	W0120 12:35:17.733399  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:17.733406  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:17.733460  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:17.769630  993585 cri.go:89] found id: ""
	I0120 12:35:17.769661  993585 logs.go:282] 0 containers: []
	W0120 12:35:17.769670  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:17.769677  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:17.769731  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:17.801855  993585 cri.go:89] found id: ""
	I0120 12:35:17.801894  993585 logs.go:282] 0 containers: []
	W0120 12:35:17.801906  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:17.801920  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:17.801935  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:17.852827  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:17.852869  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:17.866559  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:17.866589  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:17.937036  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:17.937058  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:17.937078  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:18.011449  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:18.011482  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:20.551859  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:20.564461  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:20.564522  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:20.599674  993585 cri.go:89] found id: ""
	I0120 12:35:20.599700  993585 logs.go:282] 0 containers: []
	W0120 12:35:20.599708  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:20.599713  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:20.599761  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:20.634303  993585 cri.go:89] found id: ""
	I0120 12:35:20.634330  993585 logs.go:282] 0 containers: []
	W0120 12:35:20.634340  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:20.634347  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:20.634395  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:20.670501  993585 cri.go:89] found id: ""
	I0120 12:35:20.670552  993585 logs.go:282] 0 containers: []
	W0120 12:35:20.670562  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:20.670568  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:20.670635  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:20.703603  993585 cri.go:89] found id: ""
	I0120 12:35:20.703627  993585 logs.go:282] 0 containers: []
	W0120 12:35:20.703636  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:20.703644  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:20.703699  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:20.733456  993585 cri.go:89] found id: ""
	I0120 12:35:20.733490  993585 logs.go:282] 0 containers: []
	W0120 12:35:20.733501  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:20.733509  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:20.733565  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:20.764504  993585 cri.go:89] found id: ""
	I0120 12:35:20.764529  993585 logs.go:282] 0 containers: []
	W0120 12:35:20.764539  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:20.764547  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:20.764608  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:20.796510  993585 cri.go:89] found id: ""
	I0120 12:35:20.796543  993585 logs.go:282] 0 containers: []
	W0120 12:35:20.796553  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:20.796560  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:20.796623  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:20.828114  993585 cri.go:89] found id: ""
	I0120 12:35:20.828147  993585 logs.go:282] 0 containers: []
	W0120 12:35:20.828158  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:20.828170  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:20.828189  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:20.889902  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:20.889933  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:20.889949  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:20.962443  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:20.962471  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:20.999767  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:20.999798  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:21.050810  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:21.050837  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:23.565446  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:23.577843  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:23.577912  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:23.612669  993585 cri.go:89] found id: ""
	I0120 12:35:23.612699  993585 logs.go:282] 0 containers: []
	W0120 12:35:23.612710  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:23.612719  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:23.612787  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:23.646750  993585 cri.go:89] found id: ""
	I0120 12:35:23.646783  993585 logs.go:282] 0 containers: []
	W0120 12:35:23.646793  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:23.646799  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:23.646853  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:23.679879  993585 cri.go:89] found id: ""
	I0120 12:35:23.679907  993585 logs.go:282] 0 containers: []
	W0120 12:35:23.679917  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:23.679925  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:23.679989  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:23.713255  993585 cri.go:89] found id: ""
	I0120 12:35:23.713292  993585 logs.go:282] 0 containers: []
	W0120 12:35:23.713301  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:23.713307  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:23.713358  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:23.742940  993585 cri.go:89] found id: ""
	I0120 12:35:23.742966  993585 logs.go:282] 0 containers: []
	W0120 12:35:23.742974  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:23.742980  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:23.743029  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:23.771816  993585 cri.go:89] found id: ""
	I0120 12:35:23.771846  993585 logs.go:282] 0 containers: []
	W0120 12:35:23.771865  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:23.771871  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:23.771937  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:23.801508  993585 cri.go:89] found id: ""
	I0120 12:35:23.801536  993585 logs.go:282] 0 containers: []
	W0120 12:35:23.801544  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:23.801549  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:23.801606  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:23.830867  993585 cri.go:89] found id: ""
	I0120 12:35:23.830897  993585 logs.go:282] 0 containers: []
	W0120 12:35:23.830906  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:23.830918  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:23.830934  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:23.882650  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:23.882678  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:23.895231  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:23.895260  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:23.959418  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:23.959446  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:23.959461  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:24.036771  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:24.036802  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:26.577129  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:26.594999  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:26.595084  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:26.627078  993585 cri.go:89] found id: ""
	I0120 12:35:26.627114  993585 logs.go:282] 0 containers: []
	W0120 12:35:26.627123  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:26.627129  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:26.627184  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:26.667285  993585 cri.go:89] found id: ""
	I0120 12:35:26.667317  993585 logs.go:282] 0 containers: []
	W0120 12:35:26.667333  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:26.667340  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:26.667416  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:26.704185  993585 cri.go:89] found id: ""
	I0120 12:35:26.704216  993585 logs.go:282] 0 containers: []
	W0120 12:35:26.704227  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:26.704235  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:26.704296  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:26.738047  993585 cri.go:89] found id: ""
	I0120 12:35:26.738082  993585 logs.go:282] 0 containers: []
	W0120 12:35:26.738108  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:26.738117  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:26.738183  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:26.768751  993585 cri.go:89] found id: ""
	I0120 12:35:26.768783  993585 logs.go:282] 0 containers: []
	W0120 12:35:26.768794  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:26.768801  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:26.768865  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:26.799890  993585 cri.go:89] found id: ""
	I0120 12:35:26.799916  993585 logs.go:282] 0 containers: []
	W0120 12:35:26.799924  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:26.799930  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:26.799980  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:26.831879  993585 cri.go:89] found id: ""
	I0120 12:35:26.831910  993585 logs.go:282] 0 containers: []
	W0120 12:35:26.831921  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:26.831929  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:26.831987  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:26.869231  993585 cri.go:89] found id: ""
	I0120 12:35:26.869264  993585 logs.go:282] 0 containers: []
	W0120 12:35:26.869272  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:26.869282  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:26.869294  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:26.929958  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:26.929982  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:26.929996  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:25.897831  993131 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (28.217725548s)
	I0120 12:35:25.897928  993131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 12:35:25.911960  993131 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 12:35:25.920888  993131 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:35:25.929485  993131 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:35:25.929507  993131 kubeadm.go:157] found existing configuration files:
	
	I0120 12:35:25.929555  993131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0120 12:35:25.937714  993131 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:35:25.937770  993131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:35:25.946009  993131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0120 12:35:25.954472  993131 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:35:25.954515  993131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:35:25.962622  993131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0120 12:35:25.970420  993131 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:35:25.970466  993131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:35:25.978489  993131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0120 12:35:25.986579  993131 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:35:25.986631  993131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 12:35:25.994935  993131 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 12:35:26.145798  993131 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 12:35:27.025154  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:27.025189  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:27.073288  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:27.073333  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:27.124126  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:27.124156  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:29.638666  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:29.652209  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:29.652286  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:29.690747  993585 cri.go:89] found id: ""
	I0120 12:35:29.690777  993585 logs.go:282] 0 containers: []
	W0120 12:35:29.690789  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:29.690796  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:29.690857  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:29.721866  993585 cri.go:89] found id: ""
	I0120 12:35:29.721896  993585 logs.go:282] 0 containers: []
	W0120 12:35:29.721907  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:29.721915  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:29.721978  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:29.757564  993585 cri.go:89] found id: ""
	I0120 12:35:29.757596  993585 logs.go:282] 0 containers: []
	W0120 12:35:29.757628  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:29.757637  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:29.757712  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:29.790677  993585 cri.go:89] found id: ""
	I0120 12:35:29.790709  993585 logs.go:282] 0 containers: []
	W0120 12:35:29.790720  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:29.790728  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:29.790791  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:29.826917  993585 cri.go:89] found id: ""
	I0120 12:35:29.826953  993585 logs.go:282] 0 containers: []
	W0120 12:35:29.826965  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:29.826974  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:29.827039  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:29.861866  993585 cri.go:89] found id: ""
	I0120 12:35:29.861897  993585 logs.go:282] 0 containers: []
	W0120 12:35:29.861908  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:29.861916  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:29.861973  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:29.895508  993585 cri.go:89] found id: ""
	I0120 12:35:29.895543  993585 logs.go:282] 0 containers: []
	W0120 12:35:29.895554  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:29.895563  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:29.895623  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:29.927907  993585 cri.go:89] found id: ""
	I0120 12:35:29.927939  993585 logs.go:282] 0 containers: []
	W0120 12:35:29.927949  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:29.927961  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:29.927976  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:29.968111  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:29.968149  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:30.038475  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:30.038529  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:30.051650  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:30.051679  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:30.117850  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:30.117880  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:30.117896  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:34.909127  993131 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 12:35:34.909216  993131 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 12:35:34.909344  993131 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 12:35:34.909477  993131 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 12:35:34.909620  993131 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 12:35:34.909715  993131 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 12:35:34.911105  993131 out.go:235]   - Generating certificates and keys ...
	I0120 12:35:34.911202  993131 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 12:35:34.911293  993131 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 12:35:34.911398  993131 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 12:35:34.911468  993131 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 12:35:34.911533  993131 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 12:35:34.911590  993131 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 12:35:34.911674  993131 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 12:35:34.911735  993131 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 12:35:34.911828  993131 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 12:35:34.911943  993131 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 12:35:34.912009  993131 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 12:35:34.912100  993131 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 12:35:34.912190  993131 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 12:35:34.912286  993131 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 12:35:34.912332  993131 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 12:35:34.912438  993131 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 12:35:34.912528  993131 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 12:35:34.912635  993131 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 12:35:34.912726  993131 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 12:35:34.914123  993131 out.go:235]   - Booting up control plane ...
	I0120 12:35:34.914234  993131 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 12:35:34.914348  993131 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 12:35:34.914449  993131 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 12:35:34.914608  993131 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 12:35:34.914688  993131 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 12:35:34.914725  993131 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 12:35:34.914857  993131 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 12:35:34.914944  993131 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 12:35:34.915002  993131 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.58459ms
	I0120 12:35:34.915062  993131 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 12:35:34.915123  993131 kubeadm.go:310] [api-check] The API server is healthy after 5.503412907s
	I0120 12:35:34.915262  993131 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 12:35:34.915400  993131 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 12:35:34.915458  993131 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 12:35:34.915633  993131 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-981597 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 12:35:34.915681  993131 kubeadm.go:310] [bootstrap-token] Using token: i0tzs5.z567f1ntzr02cqfq
	I0120 12:35:34.916955  993131 out.go:235]   - Configuring RBAC rules ...
	I0120 12:35:34.917087  993131 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 12:35:34.917200  993131 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 12:35:34.917374  993131 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 12:35:34.917519  993131 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 12:35:34.917673  993131 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 12:35:34.917794  993131 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 12:35:34.917950  993131 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 12:35:34.918013  993131 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 12:35:34.918074  993131 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 12:35:34.918083  993131 kubeadm.go:310] 
	I0120 12:35:34.918237  993131 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 12:35:34.918260  993131 kubeadm.go:310] 
	I0120 12:35:34.918376  993131 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 12:35:34.918388  993131 kubeadm.go:310] 
	I0120 12:35:34.918425  993131 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 12:35:34.918506  993131 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 12:35:34.918601  993131 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 12:35:34.918613  993131 kubeadm.go:310] 
	I0120 12:35:34.918694  993131 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 12:35:34.918704  993131 kubeadm.go:310] 
	I0120 12:35:34.918758  993131 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 12:35:34.918770  993131 kubeadm.go:310] 
	I0120 12:35:34.918843  993131 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 12:35:34.918947  993131 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 12:35:34.919045  993131 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 12:35:34.919057  993131 kubeadm.go:310] 
	I0120 12:35:34.919174  993131 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 12:35:34.919281  993131 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 12:35:34.919295  993131 kubeadm.go:310] 
	I0120 12:35:34.919404  993131 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token i0tzs5.z567f1ntzr02cqfq \
	I0120 12:35:34.919548  993131 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9daca4b474befed26d429fab1ed19f40e58ea9925316ea59a2e801171f5c9665 \
	I0120 12:35:34.919582  993131 kubeadm.go:310] 	--control-plane 
	I0120 12:35:34.919594  993131 kubeadm.go:310] 
	I0120 12:35:34.919711  993131 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 12:35:34.919723  993131 kubeadm.go:310] 
	I0120 12:35:34.919827  993131 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token i0tzs5.z567f1ntzr02cqfq \
	I0120 12:35:34.919982  993131 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9daca4b474befed26d429fab1ed19f40e58ea9925316ea59a2e801171f5c9665 
	I0120 12:35:34.919999  993131 cni.go:84] Creating CNI manager for ""
	I0120 12:35:34.920015  993131 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:35:34.921475  993131 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 12:35:32.712573  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:32.725809  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:32.725886  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:32.761768  993585 cri.go:89] found id: ""
	I0120 12:35:32.761803  993585 logs.go:282] 0 containers: []
	W0120 12:35:32.761812  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:32.761818  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:32.761875  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:32.797578  993585 cri.go:89] found id: ""
	I0120 12:35:32.797610  993585 logs.go:282] 0 containers: []
	W0120 12:35:32.797621  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:32.797628  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:32.797694  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:32.834493  993585 cri.go:89] found id: ""
	I0120 12:35:32.834539  993585 logs.go:282] 0 containers: []
	W0120 12:35:32.834552  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:32.834559  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:32.834644  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:32.870730  993585 cri.go:89] found id: ""
	I0120 12:35:32.870762  993585 logs.go:282] 0 containers: []
	W0120 12:35:32.870774  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:32.870782  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:32.870851  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:32.913904  993585 cri.go:89] found id: ""
	I0120 12:35:32.913932  993585 logs.go:282] 0 containers: []
	W0120 12:35:32.913943  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:32.913951  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:32.914019  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:32.955928  993585 cri.go:89] found id: ""
	I0120 12:35:32.955961  993585 logs.go:282] 0 containers: []
	W0120 12:35:32.955972  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:32.955981  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:32.956044  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:33.001075  993585 cri.go:89] found id: ""
	I0120 12:35:33.001116  993585 logs.go:282] 0 containers: []
	W0120 12:35:33.001129  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:33.001138  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:33.001209  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:33.035918  993585 cri.go:89] found id: ""
	I0120 12:35:33.035954  993585 logs.go:282] 0 containers: []
	W0120 12:35:33.035961  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:33.035971  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:33.035981  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:33.090782  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:33.090816  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:33.107144  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:33.107171  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:33.184808  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:33.184830  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:33.184845  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:33.269131  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:33.269170  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:35.809619  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:35.822178  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:35.822254  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:35.862005  993585 cri.go:89] found id: ""
	I0120 12:35:35.862035  993585 logs.go:282] 0 containers: []
	W0120 12:35:35.862042  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:35.862050  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:35.862110  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:35.896880  993585 cri.go:89] found id: ""
	I0120 12:35:35.896909  993585 logs.go:282] 0 containers: []
	W0120 12:35:35.896920  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:35.896928  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:35.896995  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:35.931762  993585 cri.go:89] found id: ""
	I0120 12:35:35.931795  993585 logs.go:282] 0 containers: []
	W0120 12:35:35.931806  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:35.931815  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:35.931882  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:35.965205  993585 cri.go:89] found id: ""
	I0120 12:35:35.965236  993585 logs.go:282] 0 containers: []
	W0120 12:35:35.965246  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:35.965254  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:35.965310  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:35.999903  993585 cri.go:89] found id: ""
	I0120 12:35:35.999926  993585 logs.go:282] 0 containers: []
	W0120 12:35:35.999943  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:35.999956  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:36.000004  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:36.033944  993585 cri.go:89] found id: ""
	I0120 12:35:36.033981  993585 logs.go:282] 0 containers: []
	W0120 12:35:36.033992  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:36.034005  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:36.034073  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:36.066986  993585 cri.go:89] found id: ""
	I0120 12:35:36.067021  993585 logs.go:282] 0 containers: []
	W0120 12:35:36.067035  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:36.067043  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:36.067108  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:36.096989  993585 cri.go:89] found id: ""
	I0120 12:35:36.097021  993585 logs.go:282] 0 containers: []
	W0120 12:35:36.097033  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:36.097047  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:36.097062  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:36.170812  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:36.170838  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:36.208578  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:36.208619  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:36.259448  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:36.259483  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:36.273938  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:36.273968  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:36.342621  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:34.922590  993131 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 12:35:34.933756  993131 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 12:35:34.952622  993131 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 12:35:34.952700  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:34.952763  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-981597 minikube.k8s.io/updated_at=2025_01_20T12_35_34_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=77d80cf1517f5f1439721b28711982314b21bec9 minikube.k8s.io/name=default-k8s-diff-port-981597 minikube.k8s.io/primary=true
	I0120 12:35:35.145316  993131 ops.go:34] apiserver oom_adj: -16
	I0120 12:35:35.161459  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:35.662404  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:36.162367  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:36.662373  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:37.162163  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:37.661727  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:38.161998  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:38.662452  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:39.161911  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:39.336211  993131 kubeadm.go:1113] duration metric: took 4.383561407s to wait for elevateKubeSystemPrivileges
	I0120 12:35:39.336266  993131 kubeadm.go:394] duration metric: took 5m4.484253589s to StartCluster
	I0120 12:35:39.336293  993131 settings.go:142] acquiring lock: {Name:mk1751cb86b61fcfcb0ff093c93fb653ca2002ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:35:39.336426  993131 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:35:39.338834  993131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/kubeconfig: {Name:mk7b2d17f701fc845d991764ab9cc32b8b0646e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:35:39.339088  993131 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.222 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 12:35:39.339220  993131 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 12:35:39.339332  993131 config.go:182] Loaded profile config "default-k8s-diff-port-981597": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:35:39.339365  993131 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-981597"
	I0120 12:35:39.339391  993131 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-981597"
	I0120 12:35:39.339390  993131 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-981597"
	W0120 12:35:39.339401  993131 addons.go:247] addon storage-provisioner should already be in state true
	I0120 12:35:39.339408  993131 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-981597"
	W0120 12:35:39.339418  993131 addons.go:247] addon dashboard should already be in state true
	I0120 12:35:39.339411  993131 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-981597"
	I0120 12:35:39.339435  993131 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-981597"
	W0120 12:35:39.339444  993131 addons.go:247] addon metrics-server should already be in state true
	I0120 12:35:39.339444  993131 host.go:66] Checking if "default-k8s-diff-port-981597" exists ...
	I0120 12:35:39.339451  993131 host.go:66] Checking if "default-k8s-diff-port-981597" exists ...
	I0120 12:35:39.339474  993131 host.go:66] Checking if "default-k8s-diff-port-981597" exists ...
	I0120 12:35:39.339390  993131 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-981597"
	I0120 12:35:39.339493  993131 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-981597"
	I0120 12:35:39.339824  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:35:39.339865  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:39.339892  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:35:39.339923  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:39.339892  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:35:39.340012  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:39.340084  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:35:39.340125  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:39.343052  993131 out.go:177] * Verifying Kubernetes components...
	I0120 12:35:39.344268  993131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:35:39.360766  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39599
	I0120 12:35:39.360936  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34553
	I0120 12:35:39.361027  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33557
	I0120 12:35:39.361484  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:39.361615  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:39.361686  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:39.361937  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:35:39.361959  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:39.362058  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:35:39.362066  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:39.362167  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:35:39.362178  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:39.362512  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:39.362592  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:39.362613  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:39.362835  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetState
	I0120 12:35:39.363083  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:35:39.363147  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:39.363178  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33345
	I0120 12:35:39.363870  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:39.364373  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:35:39.364508  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:39.364871  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:35:39.364893  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:39.365250  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:39.365757  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:35:39.365799  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:39.366758  993131 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-981597"
	W0120 12:35:39.366781  993131 addons.go:247] addon default-storageclass should already be in state true
	I0120 12:35:39.366816  993131 host.go:66] Checking if "default-k8s-diff-port-981597" exists ...
	I0120 12:35:39.367172  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:35:39.367210  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:39.385700  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43395
	I0120 12:35:39.386220  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:39.386752  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:35:39.386776  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:39.387167  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:39.387430  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetState
	I0120 12:35:39.388835  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42259
	I0120 12:35:39.389074  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42931
	I0120 12:35:39.389290  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:39.389718  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:39.389796  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:35:39.389819  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:39.390265  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:35:39.390287  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:39.390316  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .DriverName
	I0120 12:35:39.390346  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:39.390828  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:39.391044  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetState
	I0120 12:35:39.391081  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetState
	I0120 12:35:39.392517  993131 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0120 12:35:39.392556  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42719
	I0120 12:35:39.393043  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:39.393711  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:35:39.393715  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .DriverName
	I0120 12:35:39.393730  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:39.394195  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:39.394747  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:35:39.394793  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:39.395249  993131 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:35:39.395355  993131 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0120 12:35:39.395403  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .DriverName
	I0120 12:35:39.396870  993131 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:35:39.396892  993131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 12:35:39.396914  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHHostname
	I0120 12:35:39.396998  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0120 12:35:39.397017  993131 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0120 12:35:39.397039  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHHostname
	I0120 12:35:39.399496  993131 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0120 12:35:38.843738  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:38.856444  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:38.856506  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:38.892000  993585 cri.go:89] found id: ""
	I0120 12:35:38.892027  993585 logs.go:282] 0 containers: []
	W0120 12:35:38.892037  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:38.892043  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:38.892093  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:38.930509  993585 cri.go:89] found id: ""
	I0120 12:35:38.930558  993585 logs.go:282] 0 containers: []
	W0120 12:35:38.930569  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:38.930577  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:38.930643  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:38.976632  993585 cri.go:89] found id: ""
	I0120 12:35:38.976675  993585 logs.go:282] 0 containers: []
	W0120 12:35:38.976687  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:38.976695  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:38.976763  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:39.021957  993585 cri.go:89] found id: ""
	I0120 12:35:39.021993  993585 logs.go:282] 0 containers: []
	W0120 12:35:39.022004  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:39.022011  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:39.022080  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:39.060311  993585 cri.go:89] found id: ""
	I0120 12:35:39.060352  993585 logs.go:282] 0 containers: []
	W0120 12:35:39.060366  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:39.060375  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:39.060441  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:39.097901  993585 cri.go:89] found id: ""
	I0120 12:35:39.097939  993585 logs.go:282] 0 containers: []
	W0120 12:35:39.097952  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:39.097961  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:39.098029  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:39.135291  993585 cri.go:89] found id: ""
	I0120 12:35:39.135328  993585 logs.go:282] 0 containers: []
	W0120 12:35:39.135341  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:39.135349  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:39.135415  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:39.178737  993585 cri.go:89] found id: ""
	I0120 12:35:39.178775  993585 logs.go:282] 0 containers: []
	W0120 12:35:39.178810  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:39.178822  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:39.178838  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:39.228677  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:39.228723  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:39.281237  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:39.281274  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:39.298505  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:39.298554  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:39.387325  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:39.387350  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:39.387364  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:39.400927  993131 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 12:35:39.400947  993131 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 12:35:39.400969  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHHostname
	I0120 12:35:39.401577  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHPort
	I0120 12:35:39.401584  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:35:39.401591  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:35:39.401608  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4a:e1", ip: ""} in network mk-default-k8s-diff-port-981597: {Iface:virbr1 ExpiryTime:2025-01-20 13:30:20 +0000 UTC Type:0 Mac:52:54:00:a7:4a:e1 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-981597 Clientid:01:52:54:00:a7:4a:e1}
	I0120 12:35:39.401620  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4a:e1", ip: ""} in network mk-default-k8s-diff-port-981597: {Iface:virbr1 ExpiryTime:2025-01-20 13:30:20 +0000 UTC Type:0 Mac:52:54:00:a7:4a:e1 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-981597 Clientid:01:52:54:00:a7:4a:e1}
	I0120 12:35:39.401625  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined IP address 192.168.39.222 and MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:35:39.401641  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined IP address 192.168.39.222 and MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:35:39.401644  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHPort
	I0120 12:35:39.401851  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHKeyPath
	I0120 12:35:39.401948  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHKeyPath
	I0120 12:35:39.402022  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHUsername
	I0120 12:35:39.402053  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHUsername
	I0120 12:35:39.402154  993131 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/default-k8s-diff-port-981597/id_rsa Username:docker}
	I0120 12:35:39.402468  993131 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/default-k8s-diff-port-981597/id_rsa Username:docker}
	I0120 12:35:39.404077  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:35:39.406625  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHPort
	I0120 12:35:39.406703  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4a:e1", ip: ""} in network mk-default-k8s-diff-port-981597: {Iface:virbr1 ExpiryTime:2025-01-20 13:30:20 +0000 UTC Type:0 Mac:52:54:00:a7:4a:e1 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-981597 Clientid:01:52:54:00:a7:4a:e1}
	I0120 12:35:39.406720  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined IP address 192.168.39.222 and MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:35:39.410708  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHKeyPath
	I0120 12:35:39.410899  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHUsername
	I0120 12:35:39.411057  993131 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/default-k8s-diff-port-981597/id_rsa Username:docker}
	I0120 12:35:39.414646  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46217
	I0120 12:35:39.415080  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:39.415539  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:35:39.415560  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:39.415922  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:39.416132  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetState
	I0120 12:35:39.417677  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .DriverName
	I0120 12:35:39.417895  993131 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 12:35:39.417909  993131 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 12:35:39.417927  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHHostname
	I0120 12:35:39.422636  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:35:39.422665  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4a:e1", ip: ""} in network mk-default-k8s-diff-port-981597: {Iface:virbr1 ExpiryTime:2025-01-20 13:30:20 +0000 UTC Type:0 Mac:52:54:00:a7:4a:e1 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-981597 Clientid:01:52:54:00:a7:4a:e1}
	I0120 12:35:39.422682  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined IP address 192.168.39.222 and MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:35:39.422694  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHPort
	I0120 12:35:39.424675  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHKeyPath
	I0120 12:35:39.424843  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHUsername
	I0120 12:35:39.424988  993131 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/default-k8s-diff-port-981597/id_rsa Username:docker}
	I0120 12:35:39.601008  993131 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:35:39.644654  993131 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-981597" to be "Ready" ...
	I0120 12:35:39.675702  993131 node_ready.go:49] node "default-k8s-diff-port-981597" has status "Ready":"True"
	I0120 12:35:39.675723  993131 node_ready.go:38] duration metric: took 31.032591ms for node "default-k8s-diff-port-981597" to be "Ready" ...
	I0120 12:35:39.675734  993131 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:35:39.685490  993131 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-cn8tc" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:39.768195  993131 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 12:35:39.768218  993131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0120 12:35:39.812873  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0120 12:35:39.812897  993131 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0120 12:35:39.822881  993131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 12:35:39.825928  993131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:35:39.846613  993131 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 12:35:39.846645  993131 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 12:35:39.883996  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0120 12:35:39.884037  993131 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0120 12:35:39.935435  993131 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 12:35:39.935470  993131 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 12:35:39.992813  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0120 12:35:39.992840  993131 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0120 12:35:40.026214  993131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 12:35:40.069154  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0120 12:35:40.069190  993131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0120 12:35:40.121948  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0120 12:35:40.121983  993131 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0120 12:35:40.243520  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0120 12:35:40.243553  993131 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0120 12:35:40.252481  993131 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:40.252512  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .Close
	I0120 12:35:40.252849  993131 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:40.252872  993131 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:40.252885  993131 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:40.252900  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .Close
	I0120 12:35:40.253335  993131 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:40.253397  993131 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:40.253372  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | Closing plugin on server side
	I0120 12:35:40.257887  993131 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:40.257903  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .Close
	I0120 12:35:40.258196  993131 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:40.258214  993131 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:40.295226  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0120 12:35:40.295255  993131 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0120 12:35:40.386270  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0120 12:35:40.386304  993131 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0120 12:35:40.478877  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 12:35:40.478909  993131 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0120 12:35:40.533601  993131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 12:35:40.863384  993131 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.037420526s)
	I0120 12:35:40.863438  993131 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:40.863447  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .Close
	I0120 12:35:40.863790  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | Closing plugin on server side
	I0120 12:35:40.863831  993131 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:40.863841  993131 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:40.863851  993131 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:40.863864  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .Close
	I0120 12:35:40.864124  993131 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:40.864145  993131 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:40.864150  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | Closing plugin on server side
	I0120 12:35:41.207665  993131 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.181404643s)
	I0120 12:35:41.207727  993131 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:41.207743  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .Close
	I0120 12:35:41.208079  993131 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:41.208098  993131 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:41.208117  993131 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:41.208126  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .Close
	I0120 12:35:41.208422  993131 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:41.208445  993131 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:41.208445  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | Closing plugin on server side
	I0120 12:35:41.208456  993131 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-981597"
	I0120 12:35:41.719786  993131 pod_ready.go:93] pod "coredns-668d6bf9bc-cn8tc" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:41.719813  993131 pod_ready.go:82] duration metric: took 2.034287913s for pod "coredns-668d6bf9bc-cn8tc" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:41.719823  993131 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-g9m4p" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:41.984277  993131 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.450618233s)
	I0120 12:35:41.984341  993131 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:41.984368  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .Close
	I0120 12:35:41.984689  993131 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:41.984706  993131 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:41.984718  993131 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:41.984728  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .Close
	I0120 12:35:41.984738  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | Closing plugin on server side
	I0120 12:35:41.985071  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | Closing plugin on server side
	I0120 12:35:41.985119  993131 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:41.985138  993131 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:41.986711  993131 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-981597 addons enable metrics-server
	
	I0120 12:35:41.988326  993131 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0120 12:35:41.989523  993131 addons.go:514] duration metric: took 2.650315965s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0120 12:35:43.726169  993131 pod_ready.go:103] pod "coredns-668d6bf9bc-g9m4p" in "kube-system" namespace has status "Ready":"False"
	I0120 12:35:41.981886  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:41.996139  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:41.996203  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:42.028240  993585 cri.go:89] found id: ""
	I0120 12:35:42.028267  993585 logs.go:282] 0 containers: []
	W0120 12:35:42.028279  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:42.028287  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:42.028351  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:42.063513  993585 cri.go:89] found id: ""
	I0120 12:35:42.063544  993585 logs.go:282] 0 containers: []
	W0120 12:35:42.063553  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:42.063561  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:42.063622  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:42.095602  993585 cri.go:89] found id: ""
	I0120 12:35:42.095637  993585 logs.go:282] 0 containers: []
	W0120 12:35:42.095648  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:42.095656  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:42.095712  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:42.128427  993585 cri.go:89] found id: ""
	I0120 12:35:42.128460  993585 logs.go:282] 0 containers: []
	W0120 12:35:42.128471  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:42.128478  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:42.128539  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:42.163430  993585 cri.go:89] found id: ""
	I0120 12:35:42.163462  993585 logs.go:282] 0 containers: []
	W0120 12:35:42.163473  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:42.163487  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:42.163601  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:42.212225  993585 cri.go:89] found id: ""
	I0120 12:35:42.212251  993585 logs.go:282] 0 containers: []
	W0120 12:35:42.212259  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:42.212265  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:42.212326  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:42.251596  993585 cri.go:89] found id: ""
	I0120 12:35:42.251623  993585 logs.go:282] 0 containers: []
	W0120 12:35:42.251631  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:42.251637  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:42.251697  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:42.288436  993585 cri.go:89] found id: ""
	I0120 12:35:42.288472  993585 logs.go:282] 0 containers: []
	W0120 12:35:42.288485  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:42.288498  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:42.288514  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:42.351809  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:42.351858  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:42.367697  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:42.367740  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:42.445420  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:42.445452  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:42.445470  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:42.529150  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:42.529190  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:45.068423  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:45.083648  993585 kubeadm.go:597] duration metric: took 4m4.248047549s to restartPrimaryControlPlane
	W0120 12:35:45.083733  993585 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0120 12:35:45.083773  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 12:35:48.615167  993585 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.531361181s)
	I0120 12:35:48.615262  993585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 12:35:48.629340  993585 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 12:35:48.640853  993585 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:35:48.653161  993585 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:35:48.653181  993585 kubeadm.go:157] found existing configuration files:
	
	I0120 12:35:48.653220  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 12:35:48.662422  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:35:48.662489  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:35:48.672006  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 12:35:48.681430  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:35:48.681493  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:35:48.690703  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 12:35:48.699479  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:35:48.699551  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:35:48.708576  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 12:35:48.717379  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:35:48.717440  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 12:35:48.727690  993585 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 12:35:48.809089  993585 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0120 12:35:48.809181  993585 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 12:35:48.968180  993585 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 12:35:48.968344  993585 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 12:35:48.968503  993585 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0120 12:35:49.164019  993585 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 12:35:45.813799  993131 pod_ready.go:103] pod "coredns-668d6bf9bc-g9m4p" in "kube-system" namespace has status "Ready":"False"
	I0120 12:35:48.227053  993131 pod_ready.go:103] pod "coredns-668d6bf9bc-g9m4p" in "kube-system" namespace has status "Ready":"False"
	I0120 12:35:48.729367  993131 pod_ready.go:93] pod "coredns-668d6bf9bc-g9m4p" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:48.729409  993131 pod_ready.go:82] duration metric: took 7.009577783s for pod "coredns-668d6bf9bc-g9m4p" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:48.729423  993131 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:48.735596  993131 pod_ready.go:93] pod "etcd-default-k8s-diff-port-981597" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:48.735621  993131 pod_ready.go:82] duration metric: took 6.188248ms for pod "etcd-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:48.735635  993131 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:48.748236  993131 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-981597" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:48.748262  993131 pod_ready.go:82] duration metric: took 12.618834ms for pod "kube-apiserver-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:48.748275  993131 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:48.758672  993131 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-981597" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:48.758703  993131 pod_ready.go:82] duration metric: took 10.418952ms for pod "kube-controller-manager-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:48.758717  993131 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sn66t" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:48.766403  993131 pod_ready.go:93] pod "kube-proxy-sn66t" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:48.766423  993131 pod_ready.go:82] duration metric: took 7.698237ms for pod "kube-proxy-sn66t" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:48.766433  993131 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:49.124688  993131 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-981597" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:49.124714  993131 pod_ready.go:82] duration metric: took 358.274237ms for pod "kube-scheduler-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:49.124723  993131 pod_ready.go:39] duration metric: took 9.44898025s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:35:49.124740  993131 api_server.go:52] waiting for apiserver process to appear ...
	I0120 12:35:49.124803  993131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:49.172406  993131 api_server.go:72] duration metric: took 9.833266884s to wait for apiserver process to appear ...
	I0120 12:35:49.172434  993131 api_server.go:88] waiting for apiserver healthz status ...
	I0120 12:35:49.172459  993131 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0120 12:35:49.177280  993131 api_server.go:279] https://192.168.39.222:8444/healthz returned 200:
	ok
	I0120 12:35:49.178469  993131 api_server.go:141] control plane version: v1.32.0
	I0120 12:35:49.178498  993131 api_server.go:131] duration metric: took 6.05652ms to wait for apiserver health ...
	I0120 12:35:49.178508  993131 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 12:35:49.166637  993585 out.go:235]   - Generating certificates and keys ...
	I0120 12:35:49.166743  993585 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 12:35:49.166851  993585 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 12:35:49.166969  993585 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 12:35:49.167055  993585 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 12:35:49.167163  993585 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 12:35:49.167247  993585 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 12:35:49.167333  993585 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 12:35:49.167596  993585 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 12:35:49.167953  993585 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 12:35:49.168592  993585 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 12:35:49.168717  993585 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 12:35:49.168824  993585 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 12:35:49.305660  993585 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 12:35:49.652487  993585 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 12:35:49.782615  993585 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 12:35:49.921695  993585 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 12:35:49.937706  993585 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 12:35:49.939001  993585 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 12:35:49.939074  993585 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 12:35:50.070984  993585 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 12:35:50.072848  993585 out.go:235]   - Booting up control plane ...
	I0120 12:35:50.072980  993585 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 12:35:50.082351  993585 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 12:35:50.082939  993585 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 12:35:50.083932  993585 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 12:35:50.088842  993585 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0120 12:35:49.328775  993131 system_pods.go:59] 9 kube-system pods found
	I0120 12:35:49.328811  993131 system_pods.go:61] "coredns-668d6bf9bc-cn8tc" [19a18120-8f3f-45bd-92f3-c291423f4895] Running
	I0120 12:35:49.328819  993131 system_pods.go:61] "coredns-668d6bf9bc-g9m4p" [9e3e4568-92ab-4ee5-b10a-5489b72248d6] Running
	I0120 12:35:49.328825  993131 system_pods.go:61] "etcd-default-k8s-diff-port-981597" [82f73dcc-1328-428e-8eb7-550c9b2d2b22] Running
	I0120 12:35:49.328831  993131 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-981597" [ff2d67bb-7ff8-44ac-a043-b6f423339fc7] Running
	I0120 12:35:49.328837  993131 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-981597" [fa91d7b8-200d-464f-b2b0-3a08a4f435d8] Running
	I0120 12:35:49.328842  993131 system_pods.go:61] "kube-proxy-sn66t" [a90855a0-c87a-4b55-bd0e-4b95b062479d] Running
	I0120 12:35:49.328847  993131 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-981597" [26bb9f8b-4e05-4cb9-a863-75d6a6a6b652] Running
	I0120 12:35:49.328856  993131 system_pods.go:61] "metrics-server-f79f97bbb-xkrxx" [cf78f231-b1e0-4566-817b-bfb9b8dac3f6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 12:35:49.328862  993131 system_pods.go:61] "storage-provisioner" [e77b12e8-25f3-43ad-8588-2716dd4ccbd1] Running
	I0120 12:35:49.328876  993131 system_pods.go:74] duration metric: took 150.359796ms to wait for pod list to return data ...
	I0120 12:35:49.328889  993131 default_sa.go:34] waiting for default service account to be created ...
	I0120 12:35:49.619916  993131 default_sa.go:45] found service account: "default"
	I0120 12:35:49.619954  993131 default_sa.go:55] duration metric: took 291.056324ms for default service account to be created ...
	I0120 12:35:49.619967  993131 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 12:35:49.728886  993131 system_pods.go:87] 9 kube-system pods found
	I0120 12:36:30.091045  993585 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0120 12:36:30.091553  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:36:30.091777  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:36:35.092197  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:36:35.092442  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:36:45.093033  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:36:45.093302  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:37:05.094270  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:37:05.094487  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:37:45.096146  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:37:45.096378  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:37:45.096414  993585 kubeadm.go:310] 
	I0120 12:37:45.096477  993585 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0120 12:37:45.096535  993585 kubeadm.go:310] 		timed out waiting for the condition
	I0120 12:37:45.096547  993585 kubeadm.go:310] 
	I0120 12:37:45.096623  993585 kubeadm.go:310] 	This error is likely caused by:
	I0120 12:37:45.096688  993585 kubeadm.go:310] 		- The kubelet is not running
	I0120 12:37:45.096836  993585 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0120 12:37:45.096847  993585 kubeadm.go:310] 
	I0120 12:37:45.096982  993585 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0120 12:37:45.097022  993585 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0120 12:37:45.097075  993585 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0120 12:37:45.097088  993585 kubeadm.go:310] 
	I0120 12:37:45.097213  993585 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0120 12:37:45.097323  993585 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0120 12:37:45.097344  993585 kubeadm.go:310] 
	I0120 12:37:45.097440  993585 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0120 12:37:45.097575  993585 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0120 12:37:45.097684  993585 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0120 12:37:45.097786  993585 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0120 12:37:45.097798  993585 kubeadm.go:310] 
	I0120 12:37:45.098707  993585 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 12:37:45.098836  993585 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0120 12:37:45.098939  993585 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0120 12:37:45.099133  993585 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0120 12:37:45.099186  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 12:37:45.553353  993585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 12:37:45.568252  993585 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:37:45.577030  993585 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:37:45.577047  993585 kubeadm.go:157] found existing configuration files:
	
	I0120 12:37:45.577084  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 12:37:45.585663  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:37:45.585715  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:37:45.594051  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 12:37:45.602109  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:37:45.602159  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:37:45.610431  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 12:37:45.619241  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:37:45.619279  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:37:45.627467  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 12:37:45.636457  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:37:45.636508  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 12:37:45.644627  993585 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 12:37:45.711254  993585 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0120 12:37:45.711363  993585 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 12:37:45.852391  993585 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 12:37:45.852543  993585 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 12:37:45.852693  993585 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0120 12:37:46.034483  993585 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 12:37:46.036223  993585 out.go:235]   - Generating certificates and keys ...
	I0120 12:37:46.036346  993585 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 12:37:46.036455  993585 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 12:37:46.036570  993585 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 12:37:46.036663  993585 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 12:37:46.036789  993585 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 12:37:46.036889  993585 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 12:37:46.037251  993585 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 12:37:46.037740  993585 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 12:37:46.038025  993585 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 12:37:46.038414  993585 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 12:37:46.038478  993585 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 12:37:46.038581  993585 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 12:37:46.266444  993585 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 12:37:46.393858  993585 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 12:37:46.536948  993585 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 12:37:46.765338  993585 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 12:37:46.783975  993585 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 12:37:46.785028  993585 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 12:37:46.785076  993585 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 12:37:46.920894  993585 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 12:37:46.922757  993585 out.go:235]   - Booting up control plane ...
	I0120 12:37:46.922892  993585 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 12:37:46.929056  993585 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 12:37:46.933400  993585 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 12:37:46.933527  993585 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 12:37:46.939663  993585 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0120 12:38:26.942147  993585 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0120 12:38:26.942793  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:38:26.943016  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:38:31.943340  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:38:31.943563  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:38:41.944064  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:38:41.944316  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:39:01.944375  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:39:01.944608  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:39:41.943032  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:39:41.943264  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:39:41.943273  993585 kubeadm.go:310] 
	I0120 12:39:41.943326  993585 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0120 12:39:41.943363  993585 kubeadm.go:310] 		timed out waiting for the condition
	I0120 12:39:41.943383  993585 kubeadm.go:310] 
	I0120 12:39:41.943444  993585 kubeadm.go:310] 	This error is likely caused by:
	I0120 12:39:41.943506  993585 kubeadm.go:310] 		- The kubelet is not running
	I0120 12:39:41.943609  993585 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0120 12:39:41.943617  993585 kubeadm.go:310] 
	I0120 12:39:41.943716  993585 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0120 12:39:41.943762  993585 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0120 12:39:41.943814  993585 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0120 12:39:41.943826  993585 kubeadm.go:310] 
	I0120 12:39:41.943914  993585 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0120 12:39:41.944033  993585 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0120 12:39:41.944052  993585 kubeadm.go:310] 
	I0120 12:39:41.944219  993585 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0120 12:39:41.944348  993585 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0120 12:39:41.944450  993585 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0120 12:39:41.944591  993585 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0120 12:39:41.944613  993585 kubeadm.go:310] 
	I0120 12:39:41.945529  993585 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 12:39:41.945621  993585 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0120 12:39:41.945690  993585 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0120 12:39:41.945758  993585 kubeadm.go:394] duration metric: took 8m1.157734369s to StartCluster
	I0120 12:39:41.945816  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:39:41.945871  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:39:41.989147  993585 cri.go:89] found id: ""
	I0120 12:39:41.989175  993585 logs.go:282] 0 containers: []
	W0120 12:39:41.989183  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:39:41.989188  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:39:41.989251  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:39:42.021608  993585 cri.go:89] found id: ""
	I0120 12:39:42.021631  993585 logs.go:282] 0 containers: []
	W0120 12:39:42.021639  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:39:42.021646  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:39:42.021706  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:39:42.062565  993585 cri.go:89] found id: ""
	I0120 12:39:42.062592  993585 logs.go:282] 0 containers: []
	W0120 12:39:42.062601  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:39:42.062607  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:39:42.062659  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:39:42.097040  993585 cri.go:89] found id: ""
	I0120 12:39:42.097067  993585 logs.go:282] 0 containers: []
	W0120 12:39:42.097075  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:39:42.097081  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:39:42.097144  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:39:42.128833  993585 cri.go:89] found id: ""
	I0120 12:39:42.128862  993585 logs.go:282] 0 containers: []
	W0120 12:39:42.128873  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:39:42.128880  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:39:42.128936  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:39:42.159564  993585 cri.go:89] found id: ""
	I0120 12:39:42.159596  993585 logs.go:282] 0 containers: []
	W0120 12:39:42.159608  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:39:42.159616  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:39:42.159676  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:39:42.189336  993585 cri.go:89] found id: ""
	I0120 12:39:42.189367  993585 logs.go:282] 0 containers: []
	W0120 12:39:42.189378  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:39:42.189386  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:39:42.189450  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:39:42.228745  993585 cri.go:89] found id: ""
	I0120 12:39:42.228776  993585 logs.go:282] 0 containers: []
	W0120 12:39:42.228787  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:39:42.228801  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:39:42.228818  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:39:42.244466  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:39:42.244508  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:39:42.336809  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:39:42.336832  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:39:42.336844  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:39:42.443413  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:39:42.443445  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:39:42.481436  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:39:42.481466  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0120 12:39:42.533396  993585 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0120 12:39:42.533472  993585 out.go:270] * 
	W0120 12:39:42.533585  993585 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0120 12:39:42.533610  993585 out.go:270] * 
	W0120 12:39:42.534617  993585 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0120 12:39:42.537661  993585 out.go:201] 
	W0120 12:39:42.538809  993585 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0120 12:39:42.538865  993585 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0120 12:39:42.538897  993585 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0120 12:39:42.540269  993585 out.go:201] 
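	# --- Editorial note (not part of the captured log): a minimal remediation
	# sketch assembled only from advice the log itself prints above. The profile
	# name is assumed from the node hostname (old-k8s-version-134433).
	sudo systemctl enable kubelet.service    # per the kubeadm [WARNING Service-Kubelet] line
	journalctl -xeu kubelet                  # per the "Suggestion" line above
	minikube start -p old-k8s-version-134433 --extra-config=kubelet.cgroup-driver=systemd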
	
	
	==> CRI-O <==
	Jan 20 12:39:43 old-k8s-version-134433 crio[630]: time="2025-01-20 12:39:43.853662517Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737376783853638991,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c5ee1f5b-d177-4bbf-a668-70c62fa2a533 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:39:43 old-k8s-version-134433 crio[630]: time="2025-01-20 12:39:43.854047454Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d4c427ae-fd69-4f0e-a922-82f6e7350eff name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:39:43 old-k8s-version-134433 crio[630]: time="2025-01-20 12:39:43.854133887Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d4c427ae-fd69-4f0e-a922-82f6e7350eff name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:39:43 old-k8s-version-134433 crio[630]: time="2025-01-20 12:39:43.854164911Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d4c427ae-fd69-4f0e-a922-82f6e7350eff name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:39:43 old-k8s-version-134433 crio[630]: time="2025-01-20 12:39:43.883155475Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7681e53e-223c-4b2f-8463-dbabea44fd28 name=/runtime.v1.RuntimeService/Version
	Jan 20 12:39:43 old-k8s-version-134433 crio[630]: time="2025-01-20 12:39:43.883234514Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7681e53e-223c-4b2f-8463-dbabea44fd28 name=/runtime.v1.RuntimeService/Version
	Jan 20 12:39:43 old-k8s-version-134433 crio[630]: time="2025-01-20 12:39:43.884016480Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7a371950-1adc-4feb-881f-d09aa62d59e0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:39:43 old-k8s-version-134433 crio[630]: time="2025-01-20 12:39:43.884421476Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737376783884403995,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7a371950-1adc-4feb-881f-d09aa62d59e0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:39:43 old-k8s-version-134433 crio[630]: time="2025-01-20 12:39:43.884965296Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b1d5d3d2-3ee9-487b-a4f0-15dbffd17d37 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:39:43 old-k8s-version-134433 crio[630]: time="2025-01-20 12:39:43.885032967Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b1d5d3d2-3ee9-487b-a4f0-15dbffd17d37 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:39:43 old-k8s-version-134433 crio[630]: time="2025-01-20 12:39:43.885111242Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b1d5d3d2-3ee9-487b-a4f0-15dbffd17d37 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:39:43 old-k8s-version-134433 crio[630]: time="2025-01-20 12:39:43.911438104Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8735b0dc-598e-4542-8751-03ed86f40b76 name=/runtime.v1.RuntimeService/Version
	Jan 20 12:39:43 old-k8s-version-134433 crio[630]: time="2025-01-20 12:39:43.911506785Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8735b0dc-598e-4542-8751-03ed86f40b76 name=/runtime.v1.RuntimeService/Version
	Jan 20 12:39:43 old-k8s-version-134433 crio[630]: time="2025-01-20 12:39:43.912559038Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c54f5ebe-4f5f-4e54-9922-e8670893b369 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:39:43 old-k8s-version-134433 crio[630]: time="2025-01-20 12:39:43.912881080Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737376783912864541,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c54f5ebe-4f5f-4e54-9922-e8670893b369 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:39:43 old-k8s-version-134433 crio[630]: time="2025-01-20 12:39:43.913515461Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bada9392-cfcc-4836-bac6-40ffdf1accdb name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:39:43 old-k8s-version-134433 crio[630]: time="2025-01-20 12:39:43.913584218Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bada9392-cfcc-4836-bac6-40ffdf1accdb name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:39:43 old-k8s-version-134433 crio[630]: time="2025-01-20 12:39:43.913617082Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=bada9392-cfcc-4836-bac6-40ffdf1accdb name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:39:43 old-k8s-version-134433 crio[630]: time="2025-01-20 12:39:43.947043897Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0903f258-9313-4d49-9a27-397d1d5c7084 name=/runtime.v1.RuntimeService/Version
	Jan 20 12:39:43 old-k8s-version-134433 crio[630]: time="2025-01-20 12:39:43.947160396Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0903f258-9313-4d49-9a27-397d1d5c7084 name=/runtime.v1.RuntimeService/Version
	Jan 20 12:39:43 old-k8s-version-134433 crio[630]: time="2025-01-20 12:39:43.948006798Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9b26add4-9a16-4b3a-a1eb-81d174c1aea0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:39:43 old-k8s-version-134433 crio[630]: time="2025-01-20 12:39:43.948429849Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737376783948410113,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9b26add4-9a16-4b3a-a1eb-81d174c1aea0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:39:43 old-k8s-version-134433 crio[630]: time="2025-01-20 12:39:43.948912575Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=094b7878-1390-482d-85b7-d1de4f90b413 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:39:43 old-k8s-version-134433 crio[630]: time="2025-01-20 12:39:43.948996636Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=094b7878-1390-482d-85b7-d1de4f90b413 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:39:43 old-k8s-version-134433 crio[630]: time="2025-01-20 12:39:43.949027585Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=094b7878-1390-482d-85b7-d1de4f90b413 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan20 12:31] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054920] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043464] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.939919] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.154572] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.498654] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.775976] systemd-fstab-generator[557]: Ignoring "noauto" option for root device
	[  +0.069639] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050163] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.195196] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.136181] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.241855] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +6.257251] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.068017] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.557848] systemd-fstab-generator[1007]: Ignoring "noauto" option for root device
	[ +12.735598] kauditd_printk_skb: 46 callbacks suppressed
	[Jan20 12:35] systemd-fstab-generator[5113]: Ignoring "noauto" option for root device
	[Jan20 12:37] systemd-fstab-generator[5394]: Ignoring "noauto" option for root device
	[  +0.069529] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:39:44 up 8 min,  0 users,  load average: 0.02, 0.12, 0.08
	Linux old-k8s-version-134433 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jan 20 12:39:41 old-k8s-version-134433 kubelet[5570]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Jan 20 12:39:41 old-k8s-version-134433 kubelet[5570]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Jan 20 12:39:41 old-k8s-version-134433 kubelet[5570]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Jan 20 12:39:41 old-k8s-version-134433 kubelet[5570]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0008646f0)
	Jan 20 12:39:41 old-k8s-version-134433 kubelet[5570]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Jan 20 12:39:41 old-k8s-version-134433 kubelet[5570]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000da7ef0, 0x4f0ac20, 0xc000747220, 0x1, 0xc00009e0c0)
	Jan 20 12:39:41 old-k8s-version-134433 kubelet[5570]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Jan 20 12:39:41 old-k8s-version-134433 kubelet[5570]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0001650a0, 0xc00009e0c0)
	Jan 20 12:39:41 old-k8s-version-134433 kubelet[5570]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jan 20 12:39:41 old-k8s-version-134433 kubelet[5570]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jan 20 12:39:41 old-k8s-version-134433 kubelet[5570]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jan 20 12:39:41 old-k8s-version-134433 kubelet[5570]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000c7b9a0, 0xc0001bf4c0)
	Jan 20 12:39:41 old-k8s-version-134433 kubelet[5570]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jan 20 12:39:41 old-k8s-version-134433 kubelet[5570]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jan 20 12:39:41 old-k8s-version-134433 kubelet[5570]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jan 20 12:39:41 old-k8s-version-134433 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 20 12:39:41 old-k8s-version-134433 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 20 12:39:42 old-k8s-version-134433 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Jan 20 12:39:42 old-k8s-version-134433 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 20 12:39:42 old-k8s-version-134433 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 20 12:39:42 old-k8s-version-134433 kubelet[5614]: I0120 12:39:42.311843    5614 server.go:416] Version: v1.20.0
	Jan 20 12:39:42 old-k8s-version-134433 kubelet[5614]: I0120 12:39:42.312048    5614 server.go:837] Client rotation is on, will bootstrap in background
	Jan 20 12:39:42 old-k8s-version-134433 kubelet[5614]: I0120 12:39:42.313756    5614 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 20 12:39:42 old-k8s-version-134433 kubelet[5614]: W0120 12:39:42.317627    5614 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jan 20 12:39:42 old-k8s-version-134433 kubelet[5614]: I0120 12:39:42.317738    5614 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-134433 -n old-k8s-version-134433
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-134433 -n old-k8s-version-134433: exit status 2 (246.454207ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-134433" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (513.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[the identical "connection refused" warning above recurred for the remainder of the 9m0s wait; 145 duplicate log lines collapsed]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
E0120 12:42:41.307950  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/functional-473856/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
(83 identical occurrences of the warning above while polling; duplicate lines omitted)
E0120 12:44:04.380304  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/functional-473856/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
(33 identical occurrences of the warning above while polling; duplicate lines omitted)
E0120 12:44:37.399851  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
(11 identical occurrences of the warning above while polling; duplicate lines omitted)
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
E0120 12:47:41.308066  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/functional-473856/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
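Each of the warnings above corresponds to one iteration of the helper's pod-list poll: the test repeatedly asks the apiserver at 192.168.50.250:8443 for pods labelled k8s-app=kubernetes-dashboard and records the error whenever the request fails. A minimal client-go sketch of a single such check is shown below; it is an illustration only (the kubeconfig path is the one reported later in this log, and the clientset construction is assumed rather than taken from the test source).

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: the profile kubeconfig is the one named later in this log.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20151-942401/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Same request the warnings show:
	// GET .../namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app=kubernetes-dashboard
	pods, err := clientset.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
		LabelSelector: "k8s-app=kubernetes-dashboard",
	})
	if err != nil {
		// While the apiserver is down this returns the "dial tcp ... connection refused" error seen above.
		fmt.Println("pod list failed:", err)
		return
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name, p.Status.Phase)
	}
}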
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-134433 -n old-k8s-version-134433
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-134433 -n old-k8s-version-134433: exit status 2 (235.088887ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "old-k8s-version-134433" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
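The Stopped apiserver status is consistent with every poll failing with "connection refused". One quick way to confirm the symptom outside the test harness is a raw TCP dial against the endpoint named in the warnings; the sketch below is a hypothetical check, not part of the minikube test suite.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Endpoint taken from the warnings above; a refused connection here matches
	// the "dial tcp 192.168.50.250:8443: connect: connection refused" errors.
	conn, err := net.DialTimeout("tcp", "192.168.50.250:8443", 3*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is accepting connections")
}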
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-134433 -n old-k8s-version-134433
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-134433 -n old-k8s-version-134433: exit status 2 (239.116609ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-134433 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-134433 logs -n 25: (1.026109914s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-049625                           | kubernetes-upgrade-049625    | jenkins | v1.35.0 | 20 Jan 25 12:25 UTC | 20 Jan 25 12:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-049625                           | kubernetes-upgrade-049625    | jenkins | v1.35.0 | 20 Jan 25 12:26 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-049625                           | kubernetes-upgrade-049625    | jenkins | v1.35.0 | 20 Jan 25 12:26 UTC | 20 Jan 25 12:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-049625                           | kubernetes-upgrade-049625    | jenkins | v1.35.0 | 20 Jan 25 12:26 UTC | 20 Jan 25 12:26 UTC |
	| start   | -p embed-certs-987349                                  | embed-certs-987349           | jenkins | v1.35.0 | 20 Jan 25 12:26 UTC | 20 Jan 25 12:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-496524             | no-preload-496524            | jenkins | v1.35.0 | 20 Jan 25 12:26 UTC | 20 Jan 25 12:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-496524                                   | no-preload-496524            | jenkins | v1.35.0 | 20 Jan 25 12:26 UTC | 20 Jan 25 12:28 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-673364                              | cert-expiration-673364       | jenkins | v1.35.0 | 20 Jan 25 12:27 UTC | 20 Jan 25 12:27 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-673364                              | cert-expiration-673364       | jenkins | v1.35.0 | 20 Jan 25 12:27 UTC | 20 Jan 25 12:27 UTC |
	| delete  | -p                                                     | disable-driver-mounts-969801 | jenkins | v1.35.0 | 20 Jan 25 12:27 UTC | 20 Jan 25 12:27 UTC |
	|         | disable-driver-mounts-969801                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-981597 | jenkins | v1.35.0 | 20 Jan 25 12:27 UTC | 20 Jan 25 12:28 UTC |
	|         | default-k8s-diff-port-981597                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-987349            | embed-certs-987349           | jenkins | v1.35.0 | 20 Jan 25 12:27 UTC | 20 Jan 25 12:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-987349                                  | embed-certs-987349           | jenkins | v1.35.0 | 20 Jan 25 12:27 UTC | 20 Jan 25 12:29 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-496524                  | no-preload-496524            | jenkins | v1.35.0 | 20 Jan 25 12:28 UTC | 20 Jan 25 12:28 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-496524                                   | no-preload-496524            | jenkins | v1.35.0 | 20 Jan 25 12:28 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-981597  | default-k8s-diff-port-981597 | jenkins | v1.35.0 | 20 Jan 25 12:28 UTC | 20 Jan 25 12:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-981597 | jenkins | v1.35.0 | 20 Jan 25 12:28 UTC | 20 Jan 25 12:30 UTC |
	|         | default-k8s-diff-port-981597                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-987349                 | embed-certs-987349           | jenkins | v1.35.0 | 20 Jan 25 12:29 UTC | 20 Jan 25 12:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-987349                                  | embed-certs-987349           | jenkins | v1.35.0 | 20 Jan 25 12:29 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-134433        | old-k8s-version-134433       | jenkins | v1.35.0 | 20 Jan 25 12:29 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-981597       | default-k8s-diff-port-981597 | jenkins | v1.35.0 | 20 Jan 25 12:30 UTC | 20 Jan 25 12:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-981597 | jenkins | v1.35.0 | 20 Jan 25 12:30 UTC |                     |
	|         | default-k8s-diff-port-981597                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-134433                              | old-k8s-version-134433       | jenkins | v1.35.0 | 20 Jan 25 12:31 UTC | 20 Jan 25 12:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-134433             | old-k8s-version-134433       | jenkins | v1.35.0 | 20 Jan 25 12:31 UTC | 20 Jan 25 12:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-134433                              | old-k8s-version-134433       | jenkins | v1.35.0 | 20 Jan 25 12:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 12:31:11
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 12:31:11.956010  993585 out.go:345] Setting OutFile to fd 1 ...
	I0120 12:31:11.956137  993585 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:31:11.956148  993585 out.go:358] Setting ErrFile to fd 2...
	I0120 12:31:11.956152  993585 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:31:11.956366  993585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-942401/.minikube/bin
	I0120 12:31:11.956993  993585 out.go:352] Setting JSON to false
	I0120 12:31:11.958067  993585 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":18815,"bootTime":1737357457,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 12:31:11.958186  993585 start.go:139] virtualization: kvm guest
	I0120 12:31:11.960398  993585 out.go:177] * [old-k8s-version-134433] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 12:31:11.961613  993585 out.go:177]   - MINIKUBE_LOCATION=20151
	I0120 12:31:11.961713  993585 notify.go:220] Checking for updates...
	I0120 12:31:11.964011  993585 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 12:31:11.965092  993585 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:31:11.966144  993585 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-942401/.minikube
	I0120 12:31:11.967208  993585 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 12:31:11.968350  993585 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 12:31:11.969863  993585 config.go:182] Loaded profile config "old-k8s-version-134433": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0120 12:31:11.970277  993585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:31:11.970346  993585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:31:11.985419  993585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43807
	I0120 12:31:11.985879  993585 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:31:11.986551  993585 main.go:141] libmachine: Using API Version  1
	I0120 12:31:11.986596  993585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:31:11.986957  993585 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:31:11.987146  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:31:11.988784  993585 out.go:177] * Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
	I0120 12:31:11.989825  993585 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 12:31:11.990150  993585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:31:11.990189  993585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:31:12.004831  993585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34269
	I0120 12:31:12.005226  993585 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:31:12.005709  993585 main.go:141] libmachine: Using API Version  1
	I0120 12:31:12.005734  993585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:31:12.006077  993585 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:31:12.006313  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:31:12.043016  993585 out.go:177] * Using the kvm2 driver based on existing profile
	I0120 12:31:12.044104  993585 start.go:297] selected driver: kvm2
	I0120 12:31:12.044121  993585 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-134433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-1
34433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube
-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:31:12.044209  993585 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 12:31:12.044916  993585 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:31:12.045000  993585 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20151-942401/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 12:31:12.060200  993585 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 12:31:12.060534  993585 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 12:31:12.060567  993585 cni.go:84] Creating CNI manager for ""
	I0120 12:31:12.060601  993585 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:31:12.060657  993585 start.go:340] cluster config:
	{Name:old-k8s-version-134433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-134433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:31:12.060783  993585 iso.go:125] acquiring lock: {Name:mk7f0ac9e7ba04626414e742c3e5292d79996a4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:31:12.062963  993585 out.go:177] * Starting "old-k8s-version-134433" primary control-plane node in "old-k8s-version-134433" cluster
	I0120 12:31:12.064143  993585 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0120 12:31:12.064184  993585 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0120 12:31:12.064195  993585 cache.go:56] Caching tarball of preloaded images
	I0120 12:31:12.064275  993585 preload.go:172] Found /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 12:31:12.064287  993585 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0120 12:31:12.064378  993585 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/config.json ...
	I0120 12:31:12.064565  993585 start.go:360] acquireMachinesLock for old-k8s-version-134433: {Name:mkd5527ce9753efd08511b23d71dbb6bbf416f1b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 12:31:12.064608  993585 start.go:364] duration metric: took 25.197µs to acquireMachinesLock for "old-k8s-version-134433"
	I0120 12:31:12.064624  993585 start.go:96] Skipping create...Using existing machine configuration
	I0120 12:31:12.064632  993585 fix.go:54] fixHost starting: 
	I0120 12:31:12.064897  993585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:31:12.064947  993585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:31:12.079979  993585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41799
	I0120 12:31:12.080385  993585 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:31:12.080944  993585 main.go:141] libmachine: Using API Version  1
	I0120 12:31:12.080969  993585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:31:12.081279  993585 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:31:12.081512  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:31:12.081673  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetState
	I0120 12:31:12.083222  993585 fix.go:112] recreateIfNeeded on old-k8s-version-134433: state=Stopped err=<nil>
	I0120 12:31:12.083247  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	W0120 12:31:12.083395  993585 fix.go:138] unexpected machine state, will restart: <nil>
	I0120 12:31:12.084950  993585 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-134433" ...
	I0120 12:31:07.641120  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:10.142764  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:10.684376  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:12.684889  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:11.967640  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:13.968387  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:12.086040  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .Start
	I0120 12:31:12.086250  993585 main.go:141] libmachine: (old-k8s-version-134433) starting domain...
	I0120 12:31:12.086274  993585 main.go:141] libmachine: (old-k8s-version-134433) ensuring networks are active...
	I0120 12:31:12.087116  993585 main.go:141] libmachine: (old-k8s-version-134433) Ensuring network default is active
	I0120 12:31:12.087507  993585 main.go:141] libmachine: (old-k8s-version-134433) Ensuring network mk-old-k8s-version-134433 is active
	I0120 12:31:12.087972  993585 main.go:141] libmachine: (old-k8s-version-134433) getting domain XML...
	I0120 12:31:12.088701  993585 main.go:141] libmachine: (old-k8s-version-134433) creating domain...
	I0120 12:31:13.353235  993585 main.go:141] libmachine: (old-k8s-version-134433) waiting for IP...
	I0120 12:31:13.354008  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:13.354424  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:13.354568  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:13.354436  993621 retry.go:31] will retry after 195.738853ms: waiting for domain to come up
	I0120 12:31:13.551979  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:13.552485  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:13.552546  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:13.552470  993621 retry.go:31] will retry after 286.807934ms: waiting for domain to come up
	I0120 12:31:13.841028  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:13.841561  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:13.841601  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:13.841522  993621 retry.go:31] will retry after 438.177816ms: waiting for domain to come up
	I0120 12:31:14.280867  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:14.281254  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:14.281287  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:14.281212  993621 retry.go:31] will retry after 401.413585ms: waiting for domain to come up
	I0120 12:31:14.684677  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:14.685256  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:14.685288  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:14.685176  993621 retry.go:31] will retry after 625.770313ms: waiting for domain to come up
	I0120 12:31:15.312721  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:15.313245  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:15.313281  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:15.313210  993621 retry.go:31] will retry after 842.789855ms: waiting for domain to come up
	I0120 12:31:16.157329  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:16.157939  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:16.157970  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:16.157917  993621 retry.go:31] will retry after 997.649049ms: waiting for domain to come up
	I0120 12:31:12.642593  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:15.141471  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:17.141620  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:14.686169  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:17.184821  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:16.467025  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:18.966945  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:17.157668  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:17.158288  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:17.158346  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:17.158266  993621 retry.go:31] will retry after 1.3317802s: waiting for domain to come up
	I0120 12:31:18.491767  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:18.492314  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:18.492345  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:18.492274  993621 retry.go:31] will retry after 1.684115629s: waiting for domain to come up
	I0120 12:31:20.177742  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:20.178312  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:20.178344  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:20.178272  993621 retry.go:31] will retry after 2.098717757s: waiting for domain to come up
	I0120 12:31:19.141727  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:21.142012  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:19.684947  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:21.686415  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:24.185262  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:20.969393  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:23.466563  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:25.468388  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:22.279263  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:22.279782  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:22.279815  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:22.279747  993621 retry.go:31] will retry after 2.908067158s: waiting for domain to come up
	I0120 12:31:25.191591  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:25.192058  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:25.192082  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:25.192027  993621 retry.go:31] will retry after 2.860704715s: waiting for domain to come up
	I0120 12:31:23.142601  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:25.641748  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:26.685300  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:29.186578  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:27.967731  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:30.467076  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:28.053824  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:28.054209  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:28.054237  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:28.054168  993621 retry.go:31] will retry after 3.593877393s: waiting for domain to come up
	I0120 12:31:31.651977  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.652456  993585 main.go:141] libmachine: (old-k8s-version-134433) found domain IP: 192.168.50.250
	I0120 12:31:31.652477  993585 main.go:141] libmachine: (old-k8s-version-134433) reserving static IP address...
	I0120 12:31:31.652499  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has current primary IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.652880  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "old-k8s-version-134433", mac: "52:54:00:4a:b6:e2", ip: "192.168.50.250"} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:31.652910  993585 main.go:141] libmachine: (old-k8s-version-134433) reserved static IP address 192.168.50.250 for domain old-k8s-version-134433
	I0120 12:31:31.652928  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | skip adding static IP to network mk-old-k8s-version-134433 - found existing host DHCP lease matching {name: "old-k8s-version-134433", mac: "52:54:00:4a:b6:e2", ip: "192.168.50.250"}
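
	(The lines above show libmachine matching the domain's MAC address against an existing DHCP lease in the profile's dedicated libvirt network and reusing the reserved address 192.168.50.250. A minimal way to inspect that same lease table by hand, assuming the libvirt client tools are installed on the host; the network name and MAC are taken from the log, and these commands are illustrative only, not part of the test run.)

	    # List active DHCP leases on the profile's libvirt network (names from the log above).
	    virsh --connect qemu:///system net-dhcp-leases mk-old-k8s-version-134433

	    # Show any static host entry baked into the network definition for this MAC.
	    virsh --connect qemu:///system net-dumpxml mk-old-k8s-version-134433 | grep -A2 "52:54:00:4a:b6:e2"
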
	I0120 12:31:31.652949  993585 main.go:141] libmachine: (old-k8s-version-134433) waiting for SSH...
	I0120 12:31:31.652979  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | Getting to WaitForSSH function...
	I0120 12:31:31.655045  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.655323  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:31.655341  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.655472  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | Using SSH client type: external
	I0120 12:31:31.655509  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | Using SSH private key: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433/id_rsa (-rw-------)
	I0120 12:31:31.655555  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.250 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 12:31:31.655574  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | About to run SSH command:
	I0120 12:31:31.655599  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | exit 0
	I0120 12:31:31.778333  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | SSH cmd err, output: <nil>: 
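
	(The "Using SSH client type: external" block above is libmachine probing the rebooted guest by running "exit 0" over SSH with a fixed option set. A rough hand-run equivalent, reusing the exact options, key path and IP printed in the log; a sketch of the probe, not the code path itself.)

	    ssh -F /dev/null \
	        -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	        -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
	        -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	        -o IdentitiesOnly=yes \
	        -i /home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433/id_rsa \
	        -p 22 docker@192.168.50.250 'exit 0' && echo "guest SSH is up"
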
	I0120 12:31:31.778766  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetConfigRaw
	I0120 12:31:31.779451  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetIP
	I0120 12:31:31.782111  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.782481  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:31.782538  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.782728  993585 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/config.json ...
	I0120 12:31:31.782983  993585 machine.go:93] provisionDockerMachine start ...
	I0120 12:31:31.783008  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:31:31.783221  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:31.785482  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.785771  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:31.785804  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.785958  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:31:31.786153  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:31.786352  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:31.786496  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:31:31.786666  993585 main.go:141] libmachine: Using SSH client type: native
	I0120 12:31:31.786905  993585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I0120 12:31:31.786918  993585 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 12:31:31.886822  993585 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0120 12:31:31.886860  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetMachineName
	I0120 12:31:31.887127  993585 buildroot.go:166] provisioning hostname "old-k8s-version-134433"
	I0120 12:31:31.887156  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetMachineName
	I0120 12:31:31.887366  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:31.890506  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.890962  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:31.891053  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.891155  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:31:31.891355  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:31.891522  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:31.891722  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:31:31.891900  993585 main.go:141] libmachine: Using SSH client type: native
	I0120 12:31:31.892067  993585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I0120 12:31:31.892078  993585 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-134433 && echo "old-k8s-version-134433" | sudo tee /etc/hostname
	I0120 12:31:27.642107  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:30.141452  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:32.142854  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:32.007463  993585 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-134433
	
	I0120 12:31:32.007490  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:32.010730  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.011157  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.011184  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.011407  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:31:32.011597  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.011774  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.011883  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:31:32.012032  993585 main.go:141] libmachine: Using SSH client type: native
	I0120 12:31:32.012246  993585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I0120 12:31:32.012275  993585 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-134433' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-134433/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-134433' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 12:31:32.122811  993585 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 12:31:32.122845  993585 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20151-942401/.minikube CaCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20151-942401/.minikube}
	I0120 12:31:32.122865  993585 buildroot.go:174] setting up certificates
	I0120 12:31:32.122875  993585 provision.go:84] configureAuth start
	I0120 12:31:32.122884  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetMachineName
	I0120 12:31:32.123125  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetIP
	I0120 12:31:32.125986  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.126423  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.126446  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.126677  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:32.128626  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.129281  993585 provision.go:143] copyHostCerts
	I0120 12:31:32.129354  993585 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem, removing ...
	I0120 12:31:32.129380  993585 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem
	I0120 12:31:32.129382  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.129411  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.129470  993585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem (1675 bytes)
	I0120 12:31:32.129581  993585 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem, removing ...
	I0120 12:31:32.129592  993585 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem
	I0120 12:31:32.129634  993585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem (1078 bytes)
	I0120 12:31:32.129702  993585 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem, removing ...
	I0120 12:31:32.129712  993585 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem
	I0120 12:31:32.129741  993585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem (1123 bytes)
	I0120 12:31:32.129806  993585 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-134433 san=[127.0.0.1 192.168.50.250 localhost minikube old-k8s-version-134433]
	I0120 12:31:32.226358  993585 provision.go:177] copyRemoteCerts
	I0120 12:31:32.226410  993585 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 12:31:32.226432  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:32.228814  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.229133  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.229168  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.229333  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:31:32.229548  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.229722  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:31:32.229881  993585 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433/id_rsa Username:docker}
	I0120 12:31:32.315787  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0120 12:31:32.341389  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0120 12:31:32.364095  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0120 12:31:32.386543  993585 provision.go:87] duration metric: took 263.65519ms to configureAuth
	I0120 12:31:32.386572  993585 buildroot.go:189] setting minikube options for container-runtime
	I0120 12:31:32.386750  993585 config.go:182] Loaded profile config "old-k8s-version-134433": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0120 12:31:32.386844  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:32.389737  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.390222  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.390257  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.390478  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:31:32.390683  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.390858  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.391063  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:31:32.391234  993585 main.go:141] libmachine: Using SSH client type: native
	I0120 12:31:32.391417  993585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I0120 12:31:32.391438  993585 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 12:31:32.617034  993585 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 12:31:32.617072  993585 machine.go:96] duration metric: took 834.071068ms to provisionDockerMachine
	I0120 12:31:32.617085  993585 start.go:293] postStartSetup for "old-k8s-version-134433" (driver="kvm2")
	I0120 12:31:32.617096  993585 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 12:31:32.617121  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:31:32.617506  993585 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 12:31:32.617547  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:32.620838  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.621275  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.621310  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.621640  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:31:32.621865  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.622064  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:31:32.622248  993585 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433/id_rsa Username:docker}
	I0120 12:31:32.703904  993585 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 12:31:32.707878  993585 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 12:31:32.707902  993585 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-942401/.minikube/addons for local assets ...
	I0120 12:31:32.707970  993585 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-942401/.minikube/files for local assets ...
	I0120 12:31:32.708078  993585 filesync.go:149] local asset: /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem -> 9496562.pem in /etc/ssl/certs
	I0120 12:31:32.708218  993585 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 12:31:32.716746  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem --> /etc/ssl/certs/9496562.pem (1708 bytes)
	I0120 12:31:32.739636  993585 start.go:296] duration metric: took 122.539492ms for postStartSetup
	I0120 12:31:32.739674  993585 fix.go:56] duration metric: took 20.675041615s for fixHost
	I0120 12:31:32.739700  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:32.742857  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.743259  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.743291  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.743451  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:31:32.743616  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.743807  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.743953  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:31:32.744112  993585 main.go:141] libmachine: Using SSH client type: native
	I0120 12:31:32.744267  993585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I0120 12:31:32.744277  993585 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 12:31:32.850613  993585 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737376292.825194263
	
	I0120 12:31:32.850655  993585 fix.go:216] guest clock: 1737376292.825194263
	I0120 12:31:32.850667  993585 fix.go:229] Guest: 2025-01-20 12:31:32.825194263 +0000 UTC Remote: 2025-01-20 12:31:32.739679914 +0000 UTC m=+20.823511960 (delta=85.514349ms)
	I0120 12:31:32.850692  993585 fix.go:200] guest clock delta is within tolerance: 85.514349ms
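
	(The fix.go lines above read the guest clock with "date +%s.%N" over SSH, compare it with the host clock, and accept the machine because the ~85ms skew is inside tolerance. A small sketch of the same comparison in shell; "guest_date" is a hypothetical stand-in for the SSH call shown above.)

	    # Hypothetical: guest_date prints the guest's `date +%s.%N`, e.g. via the ssh probe above.
	    guest_ts=$(guest_date)
	    host_ts=$(date +%s.%N)

	    # Absolute skew in seconds; the log above reports a delta of ~85ms.
	    awk -v g="$guest_ts" -v h="$host_ts" 'BEGIN { d = g - h; if (d < 0) d = -d; printf "skew: %.3fs\n", d }'
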
	I0120 12:31:32.850697  993585 start.go:83] releasing machines lock for "old-k8s-version-134433", held for 20.786078788s
	I0120 12:31:32.850723  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:31:32.850994  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetIP
	I0120 12:31:32.853508  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.853864  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.853895  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.854081  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:31:32.854574  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:31:32.854785  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:31:32.854878  993585 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 12:31:32.854915  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:32.855040  993585 ssh_runner.go:195] Run: cat /version.json
	I0120 12:31:32.855073  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:32.857825  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.858071  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.858242  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.858273  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.858472  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:31:32.858613  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.858642  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.858678  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.858803  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:31:32.858907  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:31:32.858970  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.859042  993585 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433/id_rsa Username:docker}
	I0120 12:31:32.859089  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:31:32.859218  993585 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433/id_rsa Username:docker}
	I0120 12:31:32.963636  993585 ssh_runner.go:195] Run: systemctl --version
	I0120 12:31:32.969637  993585 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 12:31:33.109368  993585 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 12:31:33.116476  993585 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 12:31:33.116551  993585 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 12:31:33.132563  993585 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 12:31:33.132586  993585 start.go:495] detecting cgroup driver to use...
	I0120 12:31:33.132666  993585 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 12:31:33.149598  993585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 12:31:33.163579  993585 docker.go:217] disabling cri-docker service (if available) ...
	I0120 12:31:33.163644  993585 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 12:31:33.176714  993585 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 12:31:33.190002  993585 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 12:31:33.317215  993585 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 12:31:33.474712  993585 docker.go:233] disabling docker service ...
	I0120 12:31:33.474786  993585 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 12:31:33.487733  993585 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 12:31:33.500315  993585 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 12:31:33.629138  993585 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 12:31:33.765704  993585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 12:31:33.780662  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 12:31:33.799085  993585 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0120 12:31:33.799155  993585 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:31:33.808607  993585 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 12:31:33.808659  993585 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:31:33.818065  993585 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:31:33.827515  993585 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
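
	(The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the registry.k8s.io/pause:3.2 pause image, the cgroupfs cgroup manager, and a "pod" conmon cgroup. A compact way to double-check the resulting drop-in, assuming the same file path as in the log.)

	    # Show the effective values after the sed edits above.
	    sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup) ' /etc/crio/crio.conf.d/02-crio.conf
	    # Expected, per the log:
	    #   pause_image = "registry.k8s.io/pause:3.2"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
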
	I0120 12:31:33.837226  993585 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 12:31:33.846616  993585 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 12:31:33.855024  993585 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 12:31:33.855077  993585 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 12:31:33.867670  993585 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
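
	(Because /proc/sys/net/bridge/bridge-nf-call-iptables does not exist until the br_netfilter module is loaded, the sysctl probe above fails with status 255 and minikube falls back to loading the module and enabling IPv4 forwarding. The same preparation run by hand, with the module name and paths exactly as in the log.)

	    # Load the bridge netfilter module so the bridge-nf-call-* sysctls exist.
	    sudo modprobe br_netfilter
	    sudo sysctl net.bridge.bridge-nf-call-iptables

	    # Allow the node to forward pod traffic.
	    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
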
	I0120 12:31:33.876402  993585 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:31:34.006664  993585 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 12:31:34.098750  993585 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 12:31:34.098834  993585 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 12:31:34.103642  993585 start.go:563] Will wait 60s for crictl version
	I0120 12:31:34.103699  993585 ssh_runner.go:195] Run: which crictl
	I0120 12:31:34.107125  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 12:31:34.144190  993585 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 12:31:34.144288  993585 ssh_runner.go:195] Run: crio --version
	I0120 12:31:34.172817  993585 ssh_runner.go:195] Run: crio --version
	I0120 12:31:34.203224  993585 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0120 12:31:31.684648  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:33.685881  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:32.467705  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:34.470006  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:34.204485  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetIP
	I0120 12:31:34.207458  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:34.207876  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:34.207904  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:34.208137  993585 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0120 12:31:34.211891  993585 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 12:31:34.223705  993585 kubeadm.go:883] updating cluster {Name:old-k8s-version-134433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-134433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 12:31:34.223826  993585 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0120 12:31:34.223864  993585 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:31:34.268289  993585 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0120 12:31:34.268365  993585 ssh_runner.go:195] Run: which lz4
	I0120 12:31:34.272014  993585 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 12:31:34.275957  993585 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 12:31:34.275987  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0120 12:31:35.756157  993585 crio.go:462] duration metric: took 1.484200004s to copy over tarball
	I0120 12:31:35.756230  993585 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 12:31:34.642634  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:37.142882  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:35.687588  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:38.185847  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:36.967824  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:38.968146  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:38.594323  993585 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.838057752s)
	I0120 12:31:38.594429  993585 crio.go:469] duration metric: took 2.838184511s to extract the tarball
	I0120 12:31:38.594454  993585 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0120 12:31:38.636288  993585 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:31:38.673987  993585 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0120 12:31:38.674016  993585 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0120 12:31:38.674097  993585 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:31:38.674135  993585 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0120 12:31:38.674145  993585 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:31:38.674178  993585 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:31:38.674112  993585 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:31:38.674208  993585 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0120 12:31:38.674120  993585 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:31:38.674479  993585 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0120 12:31:38.675856  993585 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:31:38.675888  993585 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:31:38.675857  993585 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0120 12:31:38.675857  993585 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:31:38.675858  993585 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:31:38.675860  993585 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0120 12:31:38.675864  993585 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0120 12:31:38.675864  993585 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:31:38.891668  993585 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0120 12:31:38.898693  993585 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:31:38.901324  993585 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:31:38.903830  993585 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:31:38.907827  993585 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0120 12:31:38.909691  993585 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0120 12:31:38.911977  993585 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:31:38.988279  993585 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0120 12:31:38.988332  993585 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0120 12:31:38.988388  993585 ssh_runner.go:195] Run: which crictl
	I0120 12:31:39.039162  993585 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0120 12:31:39.039204  993585 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:31:39.039255  993585 ssh_runner.go:195] Run: which crictl
	I0120 12:31:39.070879  993585 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0120 12:31:39.070922  993585 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:31:39.070974  993585 ssh_runner.go:195] Run: which crictl
	I0120 12:31:39.078869  993585 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0120 12:31:39.078897  993585 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0120 12:31:39.078910  993585 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:31:39.078930  993585 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0120 12:31:39.078948  993585 ssh_runner.go:195] Run: which crictl
	I0120 12:31:39.078955  993585 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0120 12:31:39.078982  993585 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0120 12:31:39.078982  993585 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0120 12:31:39.079004  993585 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:31:39.079014  993585 ssh_runner.go:195] Run: which crictl
	I0120 12:31:39.078986  993585 ssh_runner.go:195] Run: which crictl
	I0120 12:31:39.079039  993585 ssh_runner.go:195] Run: which crictl
	I0120 12:31:39.079028  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 12:31:39.079059  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:31:39.081555  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:31:39.083015  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:31:39.130647  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 12:31:39.130694  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:31:39.186867  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:31:39.186961  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 12:31:39.186966  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 12:31:39.209991  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:31:39.210008  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:31:39.246249  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:31:39.246259  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 12:31:39.321520  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:31:39.321580  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 12:31:39.336397  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 12:31:39.361423  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:31:39.361625  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:31:39.382747  993585 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0120 12:31:39.382804  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 12:31:39.434483  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 12:31:39.434505  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:31:39.494972  993585 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0120 12:31:39.495045  993585 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0120 12:31:39.520487  993585 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0120 12:31:39.520534  993585 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0120 12:31:39.529832  993585 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0120 12:31:39.530428  993585 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0120 12:31:39.865446  993585 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:31:40.001428  993585 cache_images.go:92] duration metric: took 1.327395723s to LoadCachedImages
	W0120 12:31:40.001521  993585 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0120 12:31:40.001540  993585 kubeadm.go:934] updating node { 192.168.50.250 8443 v1.20.0 crio true true} ...
	I0120 12:31:40.001666  993585 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-134433 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-134433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 12:31:40.001759  993585 ssh_runner.go:195] Run: crio config
	I0120 12:31:40.049768  993585 cni.go:84] Creating CNI manager for ""
	I0120 12:31:40.049788  993585 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:31:40.049798  993585 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 12:31:40.049817  993585 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.250 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-134433 NodeName:old-k8s-version-134433 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0120 12:31:40.049953  993585 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-134433"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.250
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.250"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 12:31:40.050035  993585 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0120 12:31:40.060513  993585 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 12:31:40.060576  993585 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 12:31:40.070416  993585 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0120 12:31:40.086321  993585 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 12:31:40.101428  993585 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0120 12:31:40.118688  993585 ssh_runner.go:195] Run: grep 192.168.50.250	control-plane.minikube.internal$ /etc/hosts
	I0120 12:31:40.122319  993585 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.250	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 12:31:40.133757  993585 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:31:40.267585  993585 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:31:40.285307  993585 certs.go:68] Setting up /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433 for IP: 192.168.50.250
	I0120 12:31:40.285334  993585 certs.go:194] generating shared ca certs ...
	I0120 12:31:40.285359  993585 certs.go:226] acquiring lock for ca certs: {Name:mk7d6f61e0c358ebc451db1b73619131ab80bd63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:31:40.285629  993585 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20151-942401/.minikube/ca.key
	I0120 12:31:40.285712  993585 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.key
	I0120 12:31:40.285729  993585 certs.go:256] generating profile certs ...
	I0120 12:31:40.285868  993585 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/client.key
	I0120 12:31:40.320727  993585 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/apiserver.key.6d656c93
	I0120 12:31:40.320836  993585 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/proxy-client.key
	I0120 12:31:40.321012  993585 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656.pem (1338 bytes)
	W0120 12:31:40.321045  993585 certs.go:480] ignoring /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656_empty.pem, impossibly tiny 0 bytes
	I0120 12:31:40.321055  993585 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem (1679 bytes)
	I0120 12:31:40.321077  993585 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem (1078 bytes)
	I0120 12:31:40.321112  993585 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem (1123 bytes)
	I0120 12:31:40.321133  993585 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem (1675 bytes)
	I0120 12:31:40.321173  993585 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem (1708 bytes)
	I0120 12:31:40.321820  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 12:31:40.355849  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0120 12:31:40.384987  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 12:31:40.412042  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 12:31:40.443057  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0120 12:31:40.487592  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0120 12:31:40.524256  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 12:31:40.548205  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 12:31:40.570407  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656.pem --> /usr/share/ca-certificates/949656.pem (1338 bytes)
	I0120 12:31:40.594640  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem --> /usr/share/ca-certificates/9496562.pem (1708 bytes)
	I0120 12:31:40.617736  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 12:31:40.642388  993585 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 12:31:40.658180  993585 ssh_runner.go:195] Run: openssl version
	I0120 12:31:40.663613  993585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/949656.pem && ln -fs /usr/share/ca-certificates/949656.pem /etc/ssl/certs/949656.pem"
	I0120 12:31:40.673079  993585 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/949656.pem
	I0120 12:31:40.677607  993585 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 11:30 /usr/share/ca-certificates/949656.pem
	I0120 12:31:40.677688  993585 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/949656.pem
	I0120 12:31:40.684863  993585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/949656.pem /etc/ssl/certs/51391683.0"
	I0120 12:31:40.694838  993585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9496562.pem && ln -fs /usr/share/ca-certificates/9496562.pem /etc/ssl/certs/9496562.pem"
	I0120 12:31:40.704251  993585 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9496562.pem
	I0120 12:31:40.708616  993585 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 11:30 /usr/share/ca-certificates/9496562.pem
	I0120 12:31:40.708671  993585 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9496562.pem
	I0120 12:31:40.714178  993585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9496562.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 12:31:40.723770  993585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 12:31:40.733248  993585 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:31:40.737473  993585 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 11:22 /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:31:40.737526  993585 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:31:40.742896  993585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 12:31:40.752426  993585 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 12:31:40.756579  993585 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 12:31:40.761769  993585 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 12:31:40.766935  993585 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 12:31:40.772427  993585 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 12:31:40.777720  993585 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 12:31:40.782945  993585 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0120 12:31:40.788029  993585 kubeadm.go:392] StartCluster: {Name:old-k8s-version-134433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-134433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:31:40.788161  993585 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 12:31:40.788202  993585 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 12:31:40.825500  993585 cri.go:89] found id: ""
	I0120 12:31:40.825563  993585 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 12:31:40.835567  993585 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0120 12:31:40.835588  993585 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0120 12:31:40.835635  993585 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0120 12:31:40.845152  993585 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0120 12:31:40.845853  993585 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-134433" does not appear in /home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:31:40.846275  993585 kubeconfig.go:62] /home/jenkins/minikube-integration/20151-942401/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-134433" cluster setting kubeconfig missing "old-k8s-version-134433" context setting]
	I0120 12:31:40.846897  993585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/kubeconfig: {Name:mk7b2d17f701fc845d991764ab9cc32b8b0646e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:31:40.937033  993585 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0120 12:31:40.947319  993585 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.250
	I0120 12:31:40.947380  993585 kubeadm.go:1160] stopping kube-system containers ...
	I0120 12:31:40.947395  993585 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0120 12:31:40.947453  993585 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 12:31:40.984392  993585 cri.go:89] found id: ""
	I0120 12:31:40.984458  993585 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0120 12:31:41.001578  993585 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:31:41.011794  993585 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:31:41.011819  993585 kubeadm.go:157] found existing configuration files:
	
	I0120 12:31:41.011875  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 12:31:41.021463  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:31:41.021518  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:31:41.030836  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 12:31:41.040645  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:31:41.040698  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:31:41.049821  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 12:31:41.058040  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:31:41.058097  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:31:41.066553  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 12:31:41.075225  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:31:41.075281  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 12:31:41.084906  993585 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 12:31:41.093515  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:31:41.210064  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:31:41.666359  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:31:41.900869  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:31:39.144316  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:41.165382  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:40.817405  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:43.185212  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:41.468125  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:43.966550  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:42.000812  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:31:42.089692  993585 api_server.go:52] waiting for apiserver process to appear ...
	I0120 12:31:42.089772  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:42.590338  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:43.090787  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:43.590769  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:44.090319  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:44.590108  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:45.089838  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:45.590766  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:46.089997  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:46.590717  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:43.642362  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:46.140694  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:45.684419  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:48.185535  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:45.967037  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:47.967799  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:50.468120  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:47.090580  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:47.590292  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:48.090251  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:48.589947  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:49.090785  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:49.590768  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:50.090614  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:50.590558  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:51.090311  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:51.590228  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:48.141706  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:50.641289  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:50.684323  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:52.684538  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:52.968580  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:55.466922  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:52.090647  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:52.590162  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:53.090104  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:53.590691  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:54.090868  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:54.590219  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:55.090350  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:55.590003  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:56.090726  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:56.590283  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:52.641982  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:54.643173  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:57.142153  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:54.685013  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:57.186057  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:57.967658  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:59.968521  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:57.089873  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:57.590850  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:58.090780  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:58.590614  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:59.090635  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:59.590451  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:00.090701  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:00.590640  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:01.090753  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:01.590644  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:59.640970  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:01.641596  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:59.684870  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:01.685889  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:04.185105  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:02.466874  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:04.467851  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:02.089853  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:02.590807  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:03.089981  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:03.590808  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:04.090857  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:04.590757  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:05.089933  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:05.590271  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:06.090623  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:06.590064  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:03.644442  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:06.140708  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:06.185872  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:08.683979  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:06.468061  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:08.966912  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:07.090783  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:07.589932  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:08.090055  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:08.590241  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:09.089915  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:09.590298  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:10.089954  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:10.590262  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:11.090497  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:11.590292  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:08.142135  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:10.142823  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:10.685405  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:13.184959  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:11.467184  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:13.966687  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:12.090562  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:12.590135  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:13.090747  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:13.590675  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:14.089959  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:14.589956  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:15.090313  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:15.590672  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:16.090234  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:16.590838  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:12.641948  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:15.141465  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:15.685252  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:17.685468  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:15.968298  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:18.466913  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:17.090436  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:17.589874  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:18.089914  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:18.589959  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:19.090841  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:19.590272  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:20.090818  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:20.590893  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:21.090436  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:21.590656  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:17.641252  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:19.642645  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:22.140826  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:20.184125  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:22.184670  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:24.184995  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:20.967285  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:22.967592  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:25.467420  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:22.090802  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:22.589928  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:23.090636  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:23.590707  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:24.090639  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:24.590650  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:25.089995  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:25.590660  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:26.090132  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:26.590033  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:24.141192  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:26.641799  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:26.684732  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:29.185287  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:27.467860  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:29.967353  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:27.090577  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:27.590867  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:28.090984  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:28.590845  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:29.090300  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:29.590066  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:30.090684  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:30.590040  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:31.090303  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:31.590795  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:28.642020  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:31.141741  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:31.685583  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:34.184568  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:31.967618  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:34.468025  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:32.090206  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:32.590714  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:33.090718  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:33.590378  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:34.090656  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:34.590435  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:35.090317  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:35.590516  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:36.090582  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:36.589956  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:33.142049  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:35.142316  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:36.185027  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:38.684930  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:36.967096  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:39.467542  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:37.090078  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:37.590663  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:38.090428  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:38.590162  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:39.089913  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:39.590888  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:40.090661  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:40.590041  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:41.090883  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:41.590739  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:37.641649  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:40.140763  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:42.141742  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:40.686049  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:43.188216  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:41.966891  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:44.467792  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:42.090408  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:32:42.090485  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:32:42.129790  993585 cri.go:89] found id: ""
	I0120 12:32:42.129819  993585 logs.go:282] 0 containers: []
	W0120 12:32:42.129826  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:32:42.129832  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:32:42.129887  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:32:42.160523  993585 cri.go:89] found id: ""
	I0120 12:32:42.160546  993585 logs.go:282] 0 containers: []
	W0120 12:32:42.160555  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:32:42.160560  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:32:42.160606  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:32:42.194768  993585 cri.go:89] found id: ""
	I0120 12:32:42.194796  993585 logs.go:282] 0 containers: []
	W0120 12:32:42.194803  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:32:42.194808  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:32:42.194878  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:32:42.226406  993585 cri.go:89] found id: ""
	I0120 12:32:42.226435  993585 logs.go:282] 0 containers: []
	W0120 12:32:42.226443  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:32:42.226448  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:32:42.226497  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:32:42.263295  993585 cri.go:89] found id: ""
	I0120 12:32:42.263328  993585 logs.go:282] 0 containers: []
	W0120 12:32:42.263352  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:32:42.263362  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:32:42.263419  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:32:42.293754  993585 cri.go:89] found id: ""
	I0120 12:32:42.293784  993585 logs.go:282] 0 containers: []
	W0120 12:32:42.293794  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:32:42.293803  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:32:42.293866  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:32:42.327600  993585 cri.go:89] found id: ""
	I0120 12:32:42.327631  993585 logs.go:282] 0 containers: []
	W0120 12:32:42.327642  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:32:42.327650  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:32:42.327702  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:32:42.356668  993585 cri.go:89] found id: ""
	I0120 12:32:42.356698  993585 logs.go:282] 0 containers: []
	W0120 12:32:42.356710  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:32:42.356722  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:32:42.356734  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:32:42.405030  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:32:42.405063  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:32:42.417663  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:32:42.417690  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:32:42.538067  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:32:42.538100  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:32:42.538122  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:32:42.607706  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:32:42.607743  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:32:45.149684  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:45.161947  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:32:45.162031  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:32:45.204014  993585 cri.go:89] found id: ""
	I0120 12:32:45.204049  993585 logs.go:282] 0 containers: []
	W0120 12:32:45.204060  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:32:45.204068  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:32:45.204129  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:32:45.245164  993585 cri.go:89] found id: ""
	I0120 12:32:45.245196  993585 logs.go:282] 0 containers: []
	W0120 12:32:45.245206  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:32:45.245214  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:32:45.245278  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:32:45.285368  993585 cri.go:89] found id: ""
	I0120 12:32:45.285401  993585 logs.go:282] 0 containers: []
	W0120 12:32:45.285412  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:32:45.285420  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:32:45.285482  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:32:45.322496  993585 cri.go:89] found id: ""
	I0120 12:32:45.322551  993585 logs.go:282] 0 containers: []
	W0120 12:32:45.322564  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:32:45.322573  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:32:45.322632  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:32:45.353693  993585 cri.go:89] found id: ""
	I0120 12:32:45.353723  993585 logs.go:282] 0 containers: []
	W0120 12:32:45.353731  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:32:45.353737  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:32:45.353786  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:32:45.385705  993585 cri.go:89] found id: ""
	I0120 12:32:45.385735  993585 logs.go:282] 0 containers: []
	W0120 12:32:45.385744  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:32:45.385750  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:32:45.385800  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:32:45.419199  993585 cri.go:89] found id: ""
	I0120 12:32:45.419233  993585 logs.go:282] 0 containers: []
	W0120 12:32:45.419243  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:32:45.419251  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:32:45.419317  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:32:45.453757  993585 cri.go:89] found id: ""
	I0120 12:32:45.453789  993585 logs.go:282] 0 containers: []
	W0120 12:32:45.453800  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:32:45.453813  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:32:45.453828  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:32:45.502873  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:32:45.502902  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:32:45.515215  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:32:45.515240  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:32:45.581415  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:32:45.581443  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:32:45.581458  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:32:45.665418  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:32:45.665450  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:32:44.641564  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:46.642075  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:45.685384  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:48.184725  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:46.967382  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:48.971509  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:48.203193  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:48.215966  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:32:48.216028  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:32:48.247173  993585 cri.go:89] found id: ""
	I0120 12:32:48.247201  993585 logs.go:282] 0 containers: []
	W0120 12:32:48.247212  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:32:48.247219  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:32:48.247280  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:32:48.279393  993585 cri.go:89] found id: ""
	I0120 12:32:48.279421  993585 logs.go:282] 0 containers: []
	W0120 12:32:48.279428  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:32:48.279434  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:32:48.279488  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:32:48.310392  993585 cri.go:89] found id: ""
	I0120 12:32:48.310416  993585 logs.go:282] 0 containers: []
	W0120 12:32:48.310423  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:32:48.310429  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:32:48.310473  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:32:48.342762  993585 cri.go:89] found id: ""
	I0120 12:32:48.342794  993585 logs.go:282] 0 containers: []
	W0120 12:32:48.342803  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:32:48.342811  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:32:48.342869  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:32:48.373905  993585 cri.go:89] found id: ""
	I0120 12:32:48.373931  993585 logs.go:282] 0 containers: []
	W0120 12:32:48.373942  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:32:48.373952  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:32:48.374016  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:32:48.406406  993585 cri.go:89] found id: ""
	I0120 12:32:48.406435  993585 logs.go:282] 0 containers: []
	W0120 12:32:48.406443  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:32:48.406449  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:32:48.406494  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:32:48.442695  993585 cri.go:89] found id: ""
	I0120 12:32:48.442728  993585 logs.go:282] 0 containers: []
	W0120 12:32:48.442738  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:32:48.442746  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:32:48.442813  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:32:48.474459  993585 cri.go:89] found id: ""
	I0120 12:32:48.474485  993585 logs.go:282] 0 containers: []
	W0120 12:32:48.474494  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:32:48.474506  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:32:48.474535  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:32:48.522305  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:32:48.522337  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:32:48.535295  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:32:48.535322  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:32:48.605460  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:32:48.605493  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:32:48.605510  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:32:48.689980  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:32:48.690012  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:32:51.228008  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:51.240647  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:32:51.240708  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:32:51.274219  993585 cri.go:89] found id: ""
	I0120 12:32:51.274255  993585 logs.go:282] 0 containers: []
	W0120 12:32:51.274267  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:32:51.274275  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:32:51.274347  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:32:51.307904  993585 cri.go:89] found id: ""
	I0120 12:32:51.307930  993585 logs.go:282] 0 containers: []
	W0120 12:32:51.307939  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:32:51.307948  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:32:51.308000  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:32:51.342253  993585 cri.go:89] found id: ""
	I0120 12:32:51.342280  993585 logs.go:282] 0 containers: []
	W0120 12:32:51.342288  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:32:51.342294  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:32:51.342340  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:32:51.372185  993585 cri.go:89] found id: ""
	I0120 12:32:51.372211  993585 logs.go:282] 0 containers: []
	W0120 12:32:51.372218  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:32:51.372224  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:32:51.372268  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:32:51.402807  993585 cri.go:89] found id: ""
	I0120 12:32:51.402840  993585 logs.go:282] 0 containers: []
	W0120 12:32:51.402851  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:32:51.402858  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:32:51.402932  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:32:51.434101  993585 cri.go:89] found id: ""
	I0120 12:32:51.434129  993585 logs.go:282] 0 containers: []
	W0120 12:32:51.434139  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:32:51.434147  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:32:51.434217  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:32:51.467394  993585 cri.go:89] found id: ""
	I0120 12:32:51.467422  993585 logs.go:282] 0 containers: []
	W0120 12:32:51.467431  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:32:51.467438  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:32:51.467505  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:32:51.498551  993585 cri.go:89] found id: ""
	I0120 12:32:51.498582  993585 logs.go:282] 0 containers: []
	W0120 12:32:51.498592  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:32:51.498604  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:32:51.498619  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:32:51.577501  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:32:51.577533  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:32:51.618784  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:32:51.618825  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:32:51.671630  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:32:51.671667  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:32:51.685726  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:32:51.685750  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:32:51.751392  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:32:48.642162  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:51.142915  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:50.685157  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:53.185189  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:51.468237  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:53.967177  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:54.251524  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:54.265218  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:32:54.265281  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:32:54.299773  993585 cri.go:89] found id: ""
	I0120 12:32:54.299804  993585 logs.go:282] 0 containers: []
	W0120 12:32:54.299813  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:32:54.299820  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:32:54.299867  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:32:54.330432  993585 cri.go:89] found id: ""
	I0120 12:32:54.330461  993585 logs.go:282] 0 containers: []
	W0120 12:32:54.330471  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:32:54.330479  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:32:54.330565  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:32:54.366364  993585 cri.go:89] found id: ""
	I0120 12:32:54.366400  993585 logs.go:282] 0 containers: []
	W0120 12:32:54.366412  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:32:54.366420  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:32:54.366480  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:32:54.398373  993585 cri.go:89] found id: ""
	I0120 12:32:54.398407  993585 logs.go:282] 0 containers: []
	W0120 12:32:54.398417  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:32:54.398425  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:32:54.398486  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:32:54.437033  993585 cri.go:89] found id: ""
	I0120 12:32:54.437064  993585 logs.go:282] 0 containers: []
	W0120 12:32:54.437074  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:32:54.437081  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:32:54.437141  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:32:54.475179  993585 cri.go:89] found id: ""
	I0120 12:32:54.475203  993585 logs.go:282] 0 containers: []
	W0120 12:32:54.475211  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:32:54.475218  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:32:54.475276  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:32:54.507372  993585 cri.go:89] found id: ""
	I0120 12:32:54.507410  993585 logs.go:282] 0 containers: []
	W0120 12:32:54.507420  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:32:54.507428  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:32:54.507484  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:32:54.538317  993585 cri.go:89] found id: ""
	I0120 12:32:54.538351  993585 logs.go:282] 0 containers: []
	W0120 12:32:54.538362  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:32:54.538379  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:32:54.538400  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:32:54.620638  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:32:54.620683  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:32:54.657830  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:32:54.657859  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:32:54.707420  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:32:54.707448  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:32:54.719611  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:32:54.719640  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:32:54.784727  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:32:53.643750  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:56.141402  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:55.684905  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:57.686081  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:56.467036  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:58.468431  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:00.469379  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:57.285771  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:57.298606  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:32:57.298677  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:32:57.330216  993585 cri.go:89] found id: ""
	I0120 12:32:57.330245  993585 logs.go:282] 0 containers: []
	W0120 12:32:57.330254  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:32:57.330260  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:32:57.330317  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:32:57.362111  993585 cri.go:89] found id: ""
	I0120 12:32:57.362152  993585 logs.go:282] 0 containers: []
	W0120 12:32:57.362162  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:32:57.362169  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:32:57.362220  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:32:57.395597  993585 cri.go:89] found id: ""
	I0120 12:32:57.395624  993585 logs.go:282] 0 containers: []
	W0120 12:32:57.395634  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:32:57.395640  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:32:57.395700  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:32:57.425897  993585 cri.go:89] found id: ""
	I0120 12:32:57.425925  993585 logs.go:282] 0 containers: []
	W0120 12:32:57.425933  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:32:57.425939  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:32:57.425986  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:32:57.458500  993585 cri.go:89] found id: ""
	I0120 12:32:57.458544  993585 logs.go:282] 0 containers: []
	W0120 12:32:57.458554  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:32:57.458563  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:32:57.458625  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:32:57.489583  993585 cri.go:89] found id: ""
	I0120 12:32:57.489616  993585 logs.go:282] 0 containers: []
	W0120 12:32:57.489626  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:32:57.489634  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:32:57.489685  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:32:57.520588  993585 cri.go:89] found id: ""
	I0120 12:32:57.520617  993585 logs.go:282] 0 containers: []
	W0120 12:32:57.520624  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:32:57.520630  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:32:57.520676  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:32:57.555799  993585 cri.go:89] found id: ""
	I0120 12:32:57.555824  993585 logs.go:282] 0 containers: []
	W0120 12:32:57.555833  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:32:57.555843  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:32:57.555855  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:32:57.605038  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:32:57.605071  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:32:57.619575  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:32:57.619603  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:32:57.686685  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:32:57.686703  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:32:57.686731  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:32:57.762968  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:32:57.763003  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:00.306647  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:00.321029  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:00.321083  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:00.355924  993585 cri.go:89] found id: ""
	I0120 12:33:00.355954  993585 logs.go:282] 0 containers: []
	W0120 12:33:00.355963  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:00.355969  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:00.356021  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:00.390766  993585 cri.go:89] found id: ""
	I0120 12:33:00.390793  993585 logs.go:282] 0 containers: []
	W0120 12:33:00.390801  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:00.390807  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:00.390855  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:00.424790  993585 cri.go:89] found id: ""
	I0120 12:33:00.424820  993585 logs.go:282] 0 containers: []
	W0120 12:33:00.424828  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:00.424833  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:00.424880  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:00.454941  993585 cri.go:89] found id: ""
	I0120 12:33:00.454975  993585 logs.go:282] 0 containers: []
	W0120 12:33:00.454987  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:00.454995  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:00.455056  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:00.488642  993585 cri.go:89] found id: ""
	I0120 12:33:00.488670  993585 logs.go:282] 0 containers: []
	W0120 12:33:00.488679  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:00.488684  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:00.488731  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:00.518470  993585 cri.go:89] found id: ""
	I0120 12:33:00.518501  993585 logs.go:282] 0 containers: []
	W0120 12:33:00.518511  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:00.518535  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:00.518595  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:00.554139  993585 cri.go:89] found id: ""
	I0120 12:33:00.554167  993585 logs.go:282] 0 containers: []
	W0120 12:33:00.554174  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:00.554180  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:00.554236  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:00.587766  993585 cri.go:89] found id: ""
	I0120 12:33:00.587792  993585 logs.go:282] 0 containers: []
	W0120 12:33:00.587799  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:00.587809  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:00.587821  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:00.639504  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:00.639541  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:00.651660  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:00.651687  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:00.725669  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:00.725697  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:00.725716  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:00.806460  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:00.806496  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:32:58.642200  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:01.142620  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:00.184931  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:02.684980  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:02.967537  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:05.467661  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:03.341420  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:03.354948  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:03.355022  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:03.389867  993585 cri.go:89] found id: ""
	I0120 12:33:03.389965  993585 logs.go:282] 0 containers: []
	W0120 12:33:03.389977  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:03.389986  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:03.390042  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:03.421478  993585 cri.go:89] found id: ""
	I0120 12:33:03.421505  993585 logs.go:282] 0 containers: []
	W0120 12:33:03.421517  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:03.421525  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:03.421593  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:03.453805  993585 cri.go:89] found id: ""
	I0120 12:33:03.453838  993585 logs.go:282] 0 containers: []
	W0120 12:33:03.453850  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:03.453858  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:03.453917  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:03.487503  993585 cri.go:89] found id: ""
	I0120 12:33:03.487536  993585 logs.go:282] 0 containers: []
	W0120 12:33:03.487547  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:03.487555  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:03.487621  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:03.517560  993585 cri.go:89] found id: ""
	I0120 12:33:03.517585  993585 logs.go:282] 0 containers: []
	W0120 12:33:03.517594  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:03.517602  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:03.517661  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:03.547328  993585 cri.go:89] found id: ""
	I0120 12:33:03.547368  993585 logs.go:282] 0 containers: []
	W0120 12:33:03.547380  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:03.547389  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:03.547447  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:03.580215  993585 cri.go:89] found id: ""
	I0120 12:33:03.580242  993585 logs.go:282] 0 containers: []
	W0120 12:33:03.580251  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:03.580256  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:03.580319  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:03.613176  993585 cri.go:89] found id: ""
	I0120 12:33:03.613208  993585 logs.go:282] 0 containers: []
	W0120 12:33:03.613220  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:03.613233  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:03.613247  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:03.667093  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:03.667129  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:03.680234  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:03.680260  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:03.744763  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:03.744788  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:03.744805  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:03.824813  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:03.824856  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:06.364296  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:06.377247  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:06.377314  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:06.408701  993585 cri.go:89] found id: ""
	I0120 12:33:06.408725  993585 logs.go:282] 0 containers: []
	W0120 12:33:06.408733  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:06.408738  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:06.408800  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:06.440716  993585 cri.go:89] found id: ""
	I0120 12:33:06.440744  993585 logs.go:282] 0 containers: []
	W0120 12:33:06.440752  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:06.440758  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:06.440811  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:06.471832  993585 cri.go:89] found id: ""
	I0120 12:33:06.471866  993585 logs.go:282] 0 containers: []
	W0120 12:33:06.471877  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:06.471884  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:06.471947  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:06.504122  993585 cri.go:89] found id: ""
	I0120 12:33:06.504149  993585 logs.go:282] 0 containers: []
	W0120 12:33:06.504157  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:06.504163  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:06.504214  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:06.535353  993585 cri.go:89] found id: ""
	I0120 12:33:06.535386  993585 logs.go:282] 0 containers: []
	W0120 12:33:06.535397  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:06.535405  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:06.535460  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:06.571284  993585 cri.go:89] found id: ""
	I0120 12:33:06.571309  993585 logs.go:282] 0 containers: []
	W0120 12:33:06.571316  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:06.571322  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:06.571379  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:06.604008  993585 cri.go:89] found id: ""
	I0120 12:33:06.604042  993585 logs.go:282] 0 containers: []
	W0120 12:33:06.604055  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:06.604062  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:06.604142  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:06.636221  993585 cri.go:89] found id: ""
	I0120 12:33:06.636258  993585 logs.go:282] 0 containers: []
	W0120 12:33:06.636270  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:06.636284  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:06.636299  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:06.671820  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:06.671845  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:06.723338  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:06.723369  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:06.736258  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:06.736285  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:06.807310  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:06.807336  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:06.807352  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:03.642811  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:06.141374  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:04.685422  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:07.184287  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:09.185215  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:07.469260  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:09.967169  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:09.386909  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:09.399300  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:09.399363  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:09.431976  993585 cri.go:89] found id: ""
	I0120 12:33:09.432013  993585 logs.go:282] 0 containers: []
	W0120 12:33:09.432025  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:09.432032  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:09.432085  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:09.468016  993585 cri.go:89] found id: ""
	I0120 12:33:09.468042  993585 logs.go:282] 0 containers: []
	W0120 12:33:09.468053  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:09.468061  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:09.468124  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:09.501613  993585 cri.go:89] found id: ""
	I0120 12:33:09.501648  993585 logs.go:282] 0 containers: []
	W0120 12:33:09.501657  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:09.501667  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:09.501734  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:09.535261  993585 cri.go:89] found id: ""
	I0120 12:33:09.535296  993585 logs.go:282] 0 containers: []
	W0120 12:33:09.535308  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:09.535315  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:09.535382  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:09.569838  993585 cri.go:89] found id: ""
	I0120 12:33:09.569873  993585 logs.go:282] 0 containers: []
	W0120 12:33:09.569885  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:09.569893  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:09.569961  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:09.601673  993585 cri.go:89] found id: ""
	I0120 12:33:09.601701  993585 logs.go:282] 0 containers: []
	W0120 12:33:09.601709  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:09.601714  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:09.601773  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:09.638035  993585 cri.go:89] found id: ""
	I0120 12:33:09.638068  993585 logs.go:282] 0 containers: []
	W0120 12:33:09.638080  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:09.638089  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:09.638155  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:09.671128  993585 cri.go:89] found id: ""
	I0120 12:33:09.671149  993585 logs.go:282] 0 containers: []
	W0120 12:33:09.671156  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:09.671165  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:09.671178  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:09.723616  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:09.723648  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:09.737987  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:09.738020  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:09.810583  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:09.810613  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:09.810627  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:09.887641  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:09.887676  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:08.141896  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:10.642250  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:11.685128  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:13.686705  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:11.968039  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:13.962039  992109 pod_ready.go:82] duration metric: took 4m0.001004044s for pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace to be "Ready" ...
	E0120 12:33:13.962067  992109 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0120 12:33:13.962099  992109 pod_ready.go:39] duration metric: took 4m14.545589853s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:33:13.962140  992109 kubeadm.go:597] duration metric: took 4m21.118193658s to restartPrimaryControlPlane
	W0120 12:33:13.962239  992109 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0120 12:33:13.962281  992109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 12:33:12.423728  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:12.437277  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:12.437368  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:12.470427  993585 cri.go:89] found id: ""
	I0120 12:33:12.470455  993585 logs.go:282] 0 containers: []
	W0120 12:33:12.470463  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:12.470468  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:12.470546  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:12.501063  993585 cri.go:89] found id: ""
	I0120 12:33:12.501103  993585 logs.go:282] 0 containers: []
	W0120 12:33:12.501130  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:12.501138  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:12.501287  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:12.535254  993585 cri.go:89] found id: ""
	I0120 12:33:12.535284  993585 logs.go:282] 0 containers: []
	W0120 12:33:12.535295  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:12.535303  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:12.535354  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:12.568250  993585 cri.go:89] found id: ""
	I0120 12:33:12.568289  993585 logs.go:282] 0 containers: []
	W0120 12:33:12.568301  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:12.568307  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:12.568372  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:12.599927  993585 cri.go:89] found id: ""
	I0120 12:33:12.599961  993585 logs.go:282] 0 containers: []
	W0120 12:33:12.599970  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:12.599976  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:12.600031  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:12.632502  993585 cri.go:89] found id: ""
	I0120 12:33:12.632537  993585 logs.go:282] 0 containers: []
	W0120 12:33:12.632549  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:12.632559  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:12.632620  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:12.664166  993585 cri.go:89] found id: ""
	I0120 12:33:12.664200  993585 logs.go:282] 0 containers: []
	W0120 12:33:12.664208  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:12.664216  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:12.664270  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:12.697996  993585 cri.go:89] found id: ""
	I0120 12:33:12.698028  993585 logs.go:282] 0 containers: []
	W0120 12:33:12.698039  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:12.698054  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:12.698070  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:12.751712  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:12.751745  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:12.765184  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:12.765213  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:12.830999  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:12.831027  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:12.831046  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:12.911211  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:12.911244  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:15.449634  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:15.464863  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:15.464931  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:15.495576  993585 cri.go:89] found id: ""
	I0120 12:33:15.495609  993585 logs.go:282] 0 containers: []
	W0120 12:33:15.495620  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:15.495629  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:15.495689  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:15.525730  993585 cri.go:89] found id: ""
	I0120 12:33:15.525757  993585 logs.go:282] 0 containers: []
	W0120 12:33:15.525767  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:15.525775  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:15.525832  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:15.556077  993585 cri.go:89] found id: ""
	I0120 12:33:15.556117  993585 logs.go:282] 0 containers: []
	W0120 12:33:15.556127  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:15.556135  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:15.556195  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:15.585820  993585 cri.go:89] found id: ""
	I0120 12:33:15.585852  993585 logs.go:282] 0 containers: []
	W0120 12:33:15.585860  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:15.585867  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:15.585924  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:15.615985  993585 cri.go:89] found id: ""
	I0120 12:33:15.616027  993585 logs.go:282] 0 containers: []
	W0120 12:33:15.616035  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:15.616041  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:15.616093  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:15.648570  993585 cri.go:89] found id: ""
	I0120 12:33:15.648604  993585 logs.go:282] 0 containers: []
	W0120 12:33:15.648611  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:15.648617  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:15.648664  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:15.678674  993585 cri.go:89] found id: ""
	I0120 12:33:15.678704  993585 logs.go:282] 0 containers: []
	W0120 12:33:15.678714  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:15.678721  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:15.678786  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:15.708444  993585 cri.go:89] found id: ""
	I0120 12:33:15.708468  993585 logs.go:282] 0 containers: []
	W0120 12:33:15.708476  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:15.708485  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:15.708500  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:15.758053  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:15.758083  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:15.770661  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:15.770688  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:15.833234  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:15.833257  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:15.833271  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:15.906939  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:15.906969  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:13.142031  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:15.642742  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:16.184659  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:18.185053  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:18.442922  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:18.455489  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:18.455557  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:18.495102  993585 cri.go:89] found id: ""
	I0120 12:33:18.495135  993585 logs.go:282] 0 containers: []
	W0120 12:33:18.495145  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:18.495154  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:18.495225  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:18.530047  993585 cri.go:89] found id: ""
	I0120 12:33:18.530078  993585 logs.go:282] 0 containers: []
	W0120 12:33:18.530094  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:18.530102  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:18.530165  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:18.566556  993585 cri.go:89] found id: ""
	I0120 12:33:18.566585  993585 logs.go:282] 0 containers: []
	W0120 12:33:18.566595  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:18.566602  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:18.566661  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:18.604783  993585 cri.go:89] found id: ""
	I0120 12:33:18.604819  993585 logs.go:282] 0 containers: []
	W0120 12:33:18.604834  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:18.604842  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:18.604913  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:18.638998  993585 cri.go:89] found id: ""
	I0120 12:33:18.639025  993585 logs.go:282] 0 containers: []
	W0120 12:33:18.639036  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:18.639043  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:18.639107  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:18.669083  993585 cri.go:89] found id: ""
	I0120 12:33:18.669121  993585 logs.go:282] 0 containers: []
	W0120 12:33:18.669130  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:18.669136  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:18.669192  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:18.701062  993585 cri.go:89] found id: ""
	I0120 12:33:18.701089  993585 logs.go:282] 0 containers: []
	W0120 12:33:18.701097  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:18.701115  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:18.701180  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:18.732086  993585 cri.go:89] found id: ""
	I0120 12:33:18.732131  993585 logs.go:282] 0 containers: []
	W0120 12:33:18.732142  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:18.732157  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:18.732174  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:18.779325  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:18.779357  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:18.792530  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:18.792565  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:18.863429  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:18.863452  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:18.863464  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:18.941343  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:18.941375  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:21.481380  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:21.493618  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:21.493699  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:21.524040  993585 cri.go:89] found id: ""
	I0120 12:33:21.524067  993585 logs.go:282] 0 containers: []
	W0120 12:33:21.524075  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:21.524081  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:21.524149  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:21.554666  993585 cri.go:89] found id: ""
	I0120 12:33:21.554698  993585 logs.go:282] 0 containers: []
	W0120 12:33:21.554708  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:21.554715  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:21.554762  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:21.585584  993585 cri.go:89] found id: ""
	I0120 12:33:21.585610  993585 logs.go:282] 0 containers: []
	W0120 12:33:21.585617  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:21.585623  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:21.585670  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:21.615611  993585 cri.go:89] found id: ""
	I0120 12:33:21.615646  993585 logs.go:282] 0 containers: []
	W0120 12:33:21.615657  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:21.615666  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:21.615715  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:21.646761  993585 cri.go:89] found id: ""
	I0120 12:33:21.646788  993585 logs.go:282] 0 containers: []
	W0120 12:33:21.646796  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:21.646801  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:21.646853  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:21.681380  993585 cri.go:89] found id: ""
	I0120 12:33:21.681410  993585 logs.go:282] 0 containers: []
	W0120 12:33:21.681420  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:21.681428  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:21.681488  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:21.712708  993585 cri.go:89] found id: ""
	I0120 12:33:21.712743  993585 logs.go:282] 0 containers: []
	W0120 12:33:21.712759  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:21.712766  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:21.712828  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:21.746105  993585 cri.go:89] found id: ""
	I0120 12:33:21.746132  993585 logs.go:282] 0 containers: []
	W0120 12:33:21.746140  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:21.746150  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:21.746162  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:21.795702  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:21.795744  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:21.807548  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:21.807570  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:21.869605  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:21.869627  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:21.869646  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:21.941092  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:21.941120  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:18.142112  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:20.642242  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:20.185265  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:22.684404  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:24.487520  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:24.501031  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:24.501119  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:24.533191  993585 cri.go:89] found id: ""
	I0120 12:33:24.533220  993585 logs.go:282] 0 containers: []
	W0120 12:33:24.533230  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:24.533237  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:24.533300  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:24.565809  993585 cri.go:89] found id: ""
	I0120 12:33:24.565837  993585 logs.go:282] 0 containers: []
	W0120 12:33:24.565845  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:24.565850  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:24.565902  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:24.600607  993585 cri.go:89] found id: ""
	I0120 12:33:24.600643  993585 logs.go:282] 0 containers: []
	W0120 12:33:24.600655  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:24.600663  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:24.600742  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:24.637320  993585 cri.go:89] found id: ""
	I0120 12:33:24.637354  993585 logs.go:282] 0 containers: []
	W0120 12:33:24.637365  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:24.637373  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:24.637433  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:24.674906  993585 cri.go:89] found id: ""
	I0120 12:33:24.674940  993585 logs.go:282] 0 containers: []
	W0120 12:33:24.674952  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:24.674960  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:24.675024  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:24.707058  993585 cri.go:89] found id: ""
	I0120 12:33:24.707084  993585 logs.go:282] 0 containers: []
	W0120 12:33:24.707091  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:24.707097  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:24.707159  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:24.740554  993585 cri.go:89] found id: ""
	I0120 12:33:24.740590  993585 logs.go:282] 0 containers: []
	W0120 12:33:24.740603  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:24.740614  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:24.740680  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:24.773021  993585 cri.go:89] found id: ""
	I0120 12:33:24.773052  993585 logs.go:282] 0 containers: []
	W0120 12:33:24.773064  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:24.773077  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:24.773094  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:24.863129  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:24.863156  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:24.863169  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:24.939479  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:24.939516  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:24.975325  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:24.975358  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:25.026952  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:25.026993  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:23.141922  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:25.142300  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:24.685216  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:26.687261  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:29.183496  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:27.539957  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:27.553387  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:27.553449  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:27.587773  993585 cri.go:89] found id: ""
	I0120 12:33:27.587804  993585 logs.go:282] 0 containers: []
	W0120 12:33:27.587812  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:27.587818  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:27.587868  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:27.617735  993585 cri.go:89] found id: ""
	I0120 12:33:27.617767  993585 logs.go:282] 0 containers: []
	W0120 12:33:27.617777  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:27.617785  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:27.617865  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:27.652958  993585 cri.go:89] found id: ""
	I0120 12:33:27.652978  993585 logs.go:282] 0 containers: []
	W0120 12:33:27.652985  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:27.652990  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:27.653047  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:27.686924  993585 cri.go:89] found id: ""
	I0120 12:33:27.686947  993585 logs.go:282] 0 containers: []
	W0120 12:33:27.686954  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:27.686960  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:27.687012  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:27.720217  993585 cri.go:89] found id: ""
	I0120 12:33:27.720246  993585 logs.go:282] 0 containers: []
	W0120 12:33:27.720258  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:27.720265  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:27.720334  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:27.757382  993585 cri.go:89] found id: ""
	I0120 12:33:27.757418  993585 logs.go:282] 0 containers: []
	W0120 12:33:27.757430  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:27.757438  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:27.757504  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:27.788498  993585 cri.go:89] found id: ""
	I0120 12:33:27.788528  993585 logs.go:282] 0 containers: []
	W0120 12:33:27.788538  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:27.788546  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:27.788616  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:27.820146  993585 cri.go:89] found id: ""
	I0120 12:33:27.820178  993585 logs.go:282] 0 containers: []
	W0120 12:33:27.820186  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:27.820196  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:27.820207  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:27.832201  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:27.832225  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:27.905179  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:27.905202  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:27.905227  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:27.984792  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:27.984829  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:28.027290  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:28.027397  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:30.578691  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:30.591302  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:30.591365  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:30.627747  993585 cri.go:89] found id: ""
	I0120 12:33:30.627775  993585 logs.go:282] 0 containers: []
	W0120 12:33:30.627802  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:30.627810  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:30.627881  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:30.674653  993585 cri.go:89] found id: ""
	I0120 12:33:30.674684  993585 logs.go:282] 0 containers: []
	W0120 12:33:30.674694  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:30.674702  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:30.674766  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:30.716811  993585 cri.go:89] found id: ""
	I0120 12:33:30.716839  993585 logs.go:282] 0 containers: []
	W0120 12:33:30.716850  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:30.716857  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:30.716922  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:30.749623  993585 cri.go:89] found id: ""
	I0120 12:33:30.749655  993585 logs.go:282] 0 containers: []
	W0120 12:33:30.749666  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:30.749674  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:30.749742  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:30.780140  993585 cri.go:89] found id: ""
	I0120 12:33:30.780172  993585 logs.go:282] 0 containers: []
	W0120 12:33:30.780180  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:30.780186  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:30.780241  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:30.808356  993585 cri.go:89] found id: ""
	I0120 12:33:30.808387  993585 logs.go:282] 0 containers: []
	W0120 12:33:30.808395  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:30.808407  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:30.808476  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:30.842019  993585 cri.go:89] found id: ""
	I0120 12:33:30.842047  993585 logs.go:282] 0 containers: []
	W0120 12:33:30.842054  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:30.842060  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:30.842109  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:30.871526  993585 cri.go:89] found id: ""
	I0120 12:33:30.871551  993585 logs.go:282] 0 containers: []
	W0120 12:33:30.871559  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:30.871568  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:30.871581  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:30.919022  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:30.919051  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:30.931897  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:30.931933  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:30.993261  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:30.993282  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:30.993296  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:31.069346  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:31.069384  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:27.642074  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:30.142170  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:31.184534  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:33.184696  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:33.606755  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:33.619163  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:33.619232  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:33.654390  993585 cri.go:89] found id: ""
	I0120 12:33:33.654423  993585 logs.go:282] 0 containers: []
	W0120 12:33:33.654432  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:33.654438  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:33.654487  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:33.689183  993585 cri.go:89] found id: ""
	I0120 12:33:33.689218  993585 logs.go:282] 0 containers: []
	W0120 12:33:33.689230  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:33.689239  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:33.689302  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:33.720803  993585 cri.go:89] found id: ""
	I0120 12:33:33.720832  993585 logs.go:282] 0 containers: []
	W0120 12:33:33.720839  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:33.720845  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:33.720893  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:33.755948  993585 cri.go:89] found id: ""
	I0120 12:33:33.755985  993585 logs.go:282] 0 containers: []
	W0120 12:33:33.755995  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:33.756003  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:33.756071  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:33.788407  993585 cri.go:89] found id: ""
	I0120 12:33:33.788444  993585 logs.go:282] 0 containers: []
	W0120 12:33:33.788457  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:33.788466  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:33.788524  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:33.819077  993585 cri.go:89] found id: ""
	I0120 12:33:33.819102  993585 logs.go:282] 0 containers: []
	W0120 12:33:33.819109  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:33.819115  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:33.819164  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:33.848263  993585 cri.go:89] found id: ""
	I0120 12:33:33.848288  993585 logs.go:282] 0 containers: []
	W0120 12:33:33.848296  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:33.848301  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:33.848347  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:33.877393  993585 cri.go:89] found id: ""
	I0120 12:33:33.877428  993585 logs.go:282] 0 containers: []
	W0120 12:33:33.877439  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:33.877451  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:33.877462  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:33.928766  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:33.928796  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:33.941450  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:33.941474  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:34.004416  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:34.004446  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:34.004461  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:34.079056  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:34.079088  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:36.622644  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:36.634862  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:36.634939  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:36.670074  993585 cri.go:89] found id: ""
	I0120 12:33:36.670113  993585 logs.go:282] 0 containers: []
	W0120 12:33:36.670124  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:36.670132  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:36.670189  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:36.706117  993585 cri.go:89] found id: ""
	I0120 12:33:36.706152  993585 logs.go:282] 0 containers: []
	W0120 12:33:36.706159  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:36.706164  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:36.706219  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:36.741133  993585 cri.go:89] found id: ""
	I0120 12:33:36.741167  993585 logs.go:282] 0 containers: []
	W0120 12:33:36.741177  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:36.741185  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:36.741242  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:36.773791  993585 cri.go:89] found id: ""
	I0120 12:33:36.773819  993585 logs.go:282] 0 containers: []
	W0120 12:33:36.773830  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:36.773837  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:36.773901  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:36.807401  993585 cri.go:89] found id: ""
	I0120 12:33:36.807432  993585 logs.go:282] 0 containers: []
	W0120 12:33:36.807440  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:36.807447  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:36.807500  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:36.839815  993585 cri.go:89] found id: ""
	I0120 12:33:36.839850  993585 logs.go:282] 0 containers: []
	W0120 12:33:36.839861  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:36.839870  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:36.839934  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:36.868579  993585 cri.go:89] found id: ""
	I0120 12:33:36.868610  993585 logs.go:282] 0 containers: []
	W0120 12:33:36.868620  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:36.868626  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:36.868685  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:36.898430  993585 cri.go:89] found id: ""
	I0120 12:33:36.898455  993585 logs.go:282] 0 containers: []
	W0120 12:33:36.898462  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:36.898475  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:36.898490  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:36.947718  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:36.947758  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:32.641645  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:35.141557  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:37.141719  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:35.684708  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:37.685419  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:36.962705  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:36.962740  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:37.053761  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:37.053792  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:37.053805  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:37.148364  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:37.148400  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:39.690060  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:39.702447  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:39.702516  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:39.733846  993585 cri.go:89] found id: ""
	I0120 12:33:39.733868  993585 logs.go:282] 0 containers: []
	W0120 12:33:39.733876  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:39.733883  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:39.733939  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:39.762657  993585 cri.go:89] found id: ""
	I0120 12:33:39.762682  993585 logs.go:282] 0 containers: []
	W0120 12:33:39.762690  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:39.762695  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:39.762743  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:39.794803  993585 cri.go:89] found id: ""
	I0120 12:33:39.794832  993585 logs.go:282] 0 containers: []
	W0120 12:33:39.794841  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:39.794847  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:39.794891  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:39.823584  993585 cri.go:89] found id: ""
	I0120 12:33:39.823614  993585 logs.go:282] 0 containers: []
	W0120 12:33:39.823625  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:39.823633  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:39.823689  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:39.851954  993585 cri.go:89] found id: ""
	I0120 12:33:39.851978  993585 logs.go:282] 0 containers: []
	W0120 12:33:39.851985  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:39.851991  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:39.852091  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:39.881315  993585 cri.go:89] found id: ""
	I0120 12:33:39.881347  993585 logs.go:282] 0 containers: []
	W0120 12:33:39.881358  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:39.881367  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:39.881428  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:39.911797  993585 cri.go:89] found id: ""
	I0120 12:33:39.911827  993585 logs.go:282] 0 containers: []
	W0120 12:33:39.911836  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:39.911841  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:39.911887  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:39.941625  993585 cri.go:89] found id: ""
	I0120 12:33:39.941653  993585 logs.go:282] 0 containers: []
	W0120 12:33:39.941661  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:39.941671  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:39.941683  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:39.991689  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:39.991718  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:40.004850  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:40.004871  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:40.069863  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:40.069883  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:40.069894  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:40.149093  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:40.149129  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
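The block above is one full diagnostic pass: minikube probes for each control-plane container by name via crictl (all come back empty), then gathers kubelet, dmesg, CRI-O, and container-status logs. The same probes can be repeated by hand on the node; the commands below are taken from the log itself, and the profile name used with "minikube ssh -p <profile>" is a placeholder, since it is not shown in these lines:
	# run inside the node, e.g. after: minikube ssh -p <profile>
	sudo crictl ps -a --quiet --name=kube-apiserver   # empty output matches: found id: ""
	sudo journalctl -u kubelet -n 400                 # kubelet logs minikube gathers
	sudo journalctl -u crio -n 400                    # CRI-O logs
	sudo crictl ps -a || sudo docker ps -a            # overall container status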
	I0120 12:33:39.142612  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:41.145567  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:40.184106  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:42.184765  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:41.582218  992109 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.61991226s)
	I0120 12:33:41.582297  992109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 12:33:41.597367  992109 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 12:33:41.606890  992109 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:33:41.615799  992109 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:33:41.615823  992109 kubeadm.go:157] found existing configuration files:
	
	I0120 12:33:41.615890  992109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 12:33:41.624548  992109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:33:41.624613  992109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:33:41.634296  992109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 12:33:41.645019  992109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:33:41.645069  992109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:33:41.653988  992109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 12:33:41.662620  992109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:33:41.662661  992109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:33:41.671164  992109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 12:33:41.679068  992109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:33:41.679121  992109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 12:33:41.687730  992109 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 12:33:41.842158  992109 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
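This preflight warning is non-fatal; the remedy is the one the message itself suggests, run on the node:
	sudo systemctl enable kubelet.service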
	I0120 12:33:42.692596  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:42.710550  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:42.710636  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:42.761626  993585 cri.go:89] found id: ""
	I0120 12:33:42.761665  993585 logs.go:282] 0 containers: []
	W0120 12:33:42.761677  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:42.761685  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:42.761753  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:42.825148  993585 cri.go:89] found id: ""
	I0120 12:33:42.825181  993585 logs.go:282] 0 containers: []
	W0120 12:33:42.825191  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:42.825196  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:42.825258  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:42.859035  993585 cri.go:89] found id: ""
	I0120 12:33:42.859066  993585 logs.go:282] 0 containers: []
	W0120 12:33:42.859075  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:42.859081  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:42.859134  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:42.890335  993585 cri.go:89] found id: ""
	I0120 12:33:42.890364  993585 logs.go:282] 0 containers: []
	W0120 12:33:42.890372  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:42.890378  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:42.890442  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:42.929857  993585 cri.go:89] found id: ""
	I0120 12:33:42.929882  993585 logs.go:282] 0 containers: []
	W0120 12:33:42.929890  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:42.929896  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:42.929944  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:42.960830  993585 cri.go:89] found id: ""
	I0120 12:33:42.960864  993585 logs.go:282] 0 containers: []
	W0120 12:33:42.960874  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:42.960882  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:42.960948  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:42.995324  993585 cri.go:89] found id: ""
	I0120 12:33:42.995354  993585 logs.go:282] 0 containers: []
	W0120 12:33:42.995368  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:42.995374  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:42.995424  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:43.028259  993585 cri.go:89] found id: ""
	I0120 12:33:43.028286  993585 logs.go:282] 0 containers: []
	W0120 12:33:43.028294  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:43.028306  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:43.028316  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:43.079487  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:43.079517  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:43.091452  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:43.091475  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:43.153152  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:43.153178  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:43.153192  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:43.236284  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:43.236325  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:45.774706  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:45.791967  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:45.792052  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:45.824678  993585 cri.go:89] found id: ""
	I0120 12:33:45.824710  993585 logs.go:282] 0 containers: []
	W0120 12:33:45.824720  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:45.824729  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:45.824793  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:45.857843  993585 cri.go:89] found id: ""
	I0120 12:33:45.857876  993585 logs.go:282] 0 containers: []
	W0120 12:33:45.857885  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:45.857891  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:45.857944  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:45.898182  993585 cri.go:89] found id: ""
	I0120 12:33:45.898215  993585 logs.go:282] 0 containers: []
	W0120 12:33:45.898227  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:45.898235  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:45.898302  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:45.929223  993585 cri.go:89] found id: ""
	I0120 12:33:45.929259  993585 logs.go:282] 0 containers: []
	W0120 12:33:45.929272  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:45.929282  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:45.929380  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:45.960800  993585 cri.go:89] found id: ""
	I0120 12:33:45.960849  993585 logs.go:282] 0 containers: []
	W0120 12:33:45.960870  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:45.960879  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:45.960957  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:45.997846  993585 cri.go:89] found id: ""
	I0120 12:33:45.997878  993585 logs.go:282] 0 containers: []
	W0120 12:33:45.997889  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:45.997897  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:45.997969  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:46.033227  993585 cri.go:89] found id: ""
	I0120 12:33:46.033267  993585 logs.go:282] 0 containers: []
	W0120 12:33:46.033278  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:46.033286  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:46.033354  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:46.066691  993585 cri.go:89] found id: ""
	I0120 12:33:46.066723  993585 logs.go:282] 0 containers: []
	W0120 12:33:46.066733  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:46.066746  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:46.066763  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:46.133257  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:46.133280  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:46.133293  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:46.232667  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:46.232720  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:46.274332  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:46.274371  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:46.327098  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:46.327142  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:43.642109  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:45.643138  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:44.686233  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:47.185408  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:49.186465  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
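For the two concurrent profiles polled above (processes 992635 and 993131; their profile names are not shown in these lines), a metrics-server pod that stays NotReady can be inspected with kubectl against the matching context; a sketch:
	kubectl -n kube-system describe pod metrics-server-f79f97bbb-hb6dm   # events usually explain why it is not Ready
	kubectl -n kube-system logs deploy/metrics-server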
	I0120 12:33:49.627545  992109 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 12:33:49.627631  992109 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 12:33:49.627743  992109 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 12:33:49.627898  992109 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 12:33:49.628021  992109 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 12:33:49.628110  992109 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 12:33:49.629521  992109 out.go:235]   - Generating certificates and keys ...
	I0120 12:33:49.629586  992109 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 12:33:49.629652  992109 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 12:33:49.629732  992109 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 12:33:49.629811  992109 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 12:33:49.629945  992109 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 12:33:49.630101  992109 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 12:33:49.630179  992109 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 12:33:49.630255  992109 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 12:33:49.630331  992109 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 12:33:49.630426  992109 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 12:33:49.630491  992109 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 12:33:49.630586  992109 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 12:33:49.630669  992109 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 12:33:49.630752  992109 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 12:33:49.630819  992109 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 12:33:49.630898  992109 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 12:33:49.630946  992109 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 12:33:49.631065  992109 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 12:33:49.631148  992109 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 12:33:49.632352  992109 out.go:235]   - Booting up control plane ...
	I0120 12:33:49.632439  992109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 12:33:49.632500  992109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 12:33:49.632581  992109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 12:33:49.632734  992109 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 12:33:49.632818  992109 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 12:33:49.632854  992109 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 12:33:49.632972  992109 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 12:33:49.633093  992109 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 12:33:49.633183  992109 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.459324ms
	I0120 12:33:49.633288  992109 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 12:33:49.633376  992109 kubeadm.go:310] [api-check] The API server is healthy after 5.002077681s
	I0120 12:33:49.633495  992109 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 12:33:49.633603  992109 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 12:33:49.633652  992109 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 12:33:49.633813  992109 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-496524 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 12:33:49.633900  992109 kubeadm.go:310] [bootstrap-token] Using token: sww9nb.rwz5issf9tlw104y
	I0120 12:33:49.635315  992109 out.go:235]   - Configuring RBAC rules ...
	I0120 12:33:49.635441  992109 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 12:33:49.635546  992109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 12:33:49.635673  992109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 12:33:49.635790  992109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 12:33:49.635890  992109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 12:33:49.635965  992109 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 12:33:49.636063  992109 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 12:33:49.636105  992109 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 12:33:49.636151  992109 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 12:33:49.636157  992109 kubeadm.go:310] 
	I0120 12:33:49.636247  992109 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 12:33:49.636272  992109 kubeadm.go:310] 
	I0120 12:33:49.636388  992109 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 12:33:49.636400  992109 kubeadm.go:310] 
	I0120 12:33:49.636441  992109 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 12:33:49.636523  992109 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 12:33:49.636598  992109 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 12:33:49.636608  992109 kubeadm.go:310] 
	I0120 12:33:49.636714  992109 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 12:33:49.636738  992109 kubeadm.go:310] 
	I0120 12:33:49.636800  992109 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 12:33:49.636810  992109 kubeadm.go:310] 
	I0120 12:33:49.636874  992109 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 12:33:49.636984  992109 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 12:33:49.637071  992109 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 12:33:49.637082  992109 kubeadm.go:310] 
	I0120 12:33:49.637206  992109 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 12:33:49.637348  992109 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 12:33:49.637365  992109 kubeadm.go:310] 
	I0120 12:33:49.637484  992109 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token sww9nb.rwz5issf9tlw104y \
	I0120 12:33:49.637627  992109 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9daca4b474befed26d429fab1ed19f40e58ea9925316ea59a2e801171f5c9665 \
	I0120 12:33:49.637685  992109 kubeadm.go:310] 	--control-plane 
	I0120 12:33:49.637704  992109 kubeadm.go:310] 
	I0120 12:33:49.637810  992109 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 12:33:49.637826  992109 kubeadm.go:310] 
	I0120 12:33:49.637934  992109 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sww9nb.rwz5issf9tlw104y \
	I0120 12:33:49.638086  992109 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9daca4b474befed26d429fab1ed19f40e58ea9925316ea59a2e801171f5c9665 
	I0120 12:33:49.638103  992109 cni.go:84] Creating CNI manager for ""
	I0120 12:33:49.638112  992109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:33:49.639791  992109 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 12:33:49.641114  992109 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 12:33:49.651726  992109 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
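The 496-byte conflist copied above is minikube's bridge CNI configuration; its contents are not printed in this log, but the file can be inspected on the node:
	sudo ls -la /etc/cni/net.d/
	sudo cat /etc/cni/net.d/1-k8s.conflist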
	I0120 12:33:49.670543  992109 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 12:33:49.670636  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:49.670688  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-496524 minikube.k8s.io/updated_at=2025_01_20T12_33_49_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=77d80cf1517f5f1439721b28711982314b21bec9 minikube.k8s.io/name=no-preload-496524 minikube.k8s.io/primary=true
	I0120 12:33:49.704840  992109 ops.go:34] apiserver oom_adj: -16
	I0120 12:33:49.859209  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:50.359791  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:50.859509  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:48.841385  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:48.854037  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:48.854105  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:48.889959  993585 cri.go:89] found id: ""
	I0120 12:33:48.889996  993585 logs.go:282] 0 containers: []
	W0120 12:33:48.890008  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:48.890017  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:48.890084  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:48.926271  993585 cri.go:89] found id: ""
	I0120 12:33:48.926313  993585 logs.go:282] 0 containers: []
	W0120 12:33:48.926326  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:48.926334  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:48.926409  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:48.962768  993585 cri.go:89] found id: ""
	I0120 12:33:48.962803  993585 logs.go:282] 0 containers: []
	W0120 12:33:48.962816  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:48.962825  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:48.962895  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:48.998039  993585 cri.go:89] found id: ""
	I0120 12:33:48.998075  993585 logs.go:282] 0 containers: []
	W0120 12:33:48.998086  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:48.998093  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:48.998161  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:49.038710  993585 cri.go:89] found id: ""
	I0120 12:33:49.038745  993585 logs.go:282] 0 containers: []
	W0120 12:33:49.038756  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:49.038765  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:49.038835  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:49.074829  993585 cri.go:89] found id: ""
	I0120 12:33:49.074863  993585 logs.go:282] 0 containers: []
	W0120 12:33:49.074874  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:49.074883  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:49.074950  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:49.115354  993585 cri.go:89] found id: ""
	I0120 12:33:49.115383  993585 logs.go:282] 0 containers: []
	W0120 12:33:49.115392  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:49.115397  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:49.115446  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:49.152837  993585 cri.go:89] found id: ""
	I0120 12:33:49.152870  993585 logs.go:282] 0 containers: []
	W0120 12:33:49.152880  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:49.152892  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:49.152906  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:49.194817  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:49.194842  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:49.247223  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:49.247255  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:49.259939  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:49.259965  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:49.326047  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:49.326081  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:49.326108  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:51.904391  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:51.916726  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:51.916806  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:51.950574  993585 cri.go:89] found id: ""
	I0120 12:33:51.950602  993585 logs.go:282] 0 containers: []
	W0120 12:33:51.950610  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:51.950619  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:51.950683  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:48.141455  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:50.142912  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:51.359718  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:51.859742  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:52.359728  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:52.859803  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:53.359731  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:53.859729  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:53.963052  992109 kubeadm.go:1113] duration metric: took 4.292471944s to wait for elevateKubeSystemPrivileges
	I0120 12:33:53.963109  992109 kubeadm.go:394] duration metric: took 5m1.161906665s to StartCluster
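The repeated "kubectl get sa default" calls above appear to be the elevateKubeSystemPrivileges wait: after creating the minikube-rbac clusterrolebinding, minikube polls until the "default" ServiceAccount exists in the new cluster. A hand-rolled equivalent, reusing the binary and kubeconfig paths shown in the log:
	until sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do sleep 0.5; done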
	I0120 12:33:53.963139  992109 settings.go:142] acquiring lock: {Name:mk1751cb86b61fcfcb0ff093c93fb653ca2002ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:33:53.963257  992109 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:33:53.964929  992109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/kubeconfig: {Name:mk7b2d17f701fc845d991764ab9cc32b8b0646e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:33:53.965243  992109 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 12:33:53.965321  992109 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 12:33:53.965437  992109 addons.go:69] Setting storage-provisioner=true in profile "no-preload-496524"
	I0120 12:33:53.965452  992109 addons.go:69] Setting dashboard=true in profile "no-preload-496524"
	I0120 12:33:53.965477  992109 addons.go:238] Setting addon storage-provisioner=true in "no-preload-496524"
	W0120 12:33:53.965487  992109 addons.go:247] addon storage-provisioner should already be in state true
	I0120 12:33:53.965490  992109 addons.go:238] Setting addon dashboard=true in "no-preload-496524"
	I0120 12:33:53.965481  992109 addons.go:69] Setting default-storageclass=true in profile "no-preload-496524"
	W0120 12:33:53.965502  992109 addons.go:247] addon dashboard should already be in state true
	I0120 12:33:53.965518  992109 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-496524"
	I0120 12:33:53.965520  992109 config.go:182] Loaded profile config "no-preload-496524": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:33:53.965528  992109 host.go:66] Checking if "no-preload-496524" exists ...
	I0120 12:33:53.965534  992109 host.go:66] Checking if "no-preload-496524" exists ...
	I0120 12:33:53.965514  992109 addons.go:69] Setting metrics-server=true in profile "no-preload-496524"
	I0120 12:33:53.965570  992109 addons.go:238] Setting addon metrics-server=true in "no-preload-496524"
	W0120 12:33:53.965584  992109 addons.go:247] addon metrics-server should already be in state true
	I0120 12:33:53.965628  992109 host.go:66] Checking if "no-preload-496524" exists ...
	I0120 12:33:53.965928  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:33:53.965934  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:33:53.965947  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:33:53.965963  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:53.965985  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:53.966029  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:33:53.966054  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:53.966065  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:53.966567  992109 out.go:177] * Verifying Kubernetes components...
	I0120 12:33:53.967881  992109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:33:53.983553  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43169
	I0120 12:33:53.984079  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:53.984654  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:33:53.984681  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:53.985111  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:53.985353  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetState
	I0120 12:33:53.986475  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38587
	I0120 12:33:53.986716  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33637
	I0120 12:33:53.987021  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:53.987492  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:53.987571  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:33:53.987588  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:53.987741  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44123
	I0120 12:33:53.987942  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:53.988075  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:53.988425  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:33:53.988440  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:53.988577  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:33:53.988627  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:53.988783  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:33:53.988797  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:53.988855  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:53.989000  992109 addons.go:238] Setting addon default-storageclass=true in "no-preload-496524"
	W0120 12:33:53.989019  992109 addons.go:247] addon default-storageclass should already be in state true
	I0120 12:33:53.989052  992109 host.go:66] Checking if "no-preload-496524" exists ...
	I0120 12:33:53.989187  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:53.989393  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:33:53.989420  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:33:53.989431  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:53.989455  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:53.989672  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:33:53.989711  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:54.005609  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46523
	I0120 12:33:54.006182  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:54.006760  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:33:54.006786  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:54.007131  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41029
	I0120 12:33:54.007443  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:54.008065  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:33:54.008108  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:54.008308  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34409
	I0120 12:33:54.008359  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:54.008993  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:33:54.009021  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:54.009407  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:54.009597  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetState
	I0120 12:33:54.011591  992109 main.go:141] libmachine: (no-preload-496524) Calling .DriverName
	I0120 12:33:54.013572  992109 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0120 12:33:54.014814  992109 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0120 12:33:54.015103  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:54.015538  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:33:54.015562  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:54.015921  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0120 12:33:54.015946  992109 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0120 12:33:54.015970  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHHostname
	I0120 12:33:54.015997  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:54.016619  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetState
	I0120 12:33:54.018868  992109 main.go:141] libmachine: (no-preload-496524) Calling .DriverName
	I0120 12:33:54.019948  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:33:54.020370  992109 main.go:141] libmachine: (no-preload-496524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:8f:cb", ip: ""} in network mk-no-preload-496524: {Iface:virbr3 ExpiryTime:2025-01-20 13:28:26 +0000 UTC Type:0 Mac:52:54:00:13:8f:cb Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-496524 Clientid:01:52:54:00:13:8f:cb}
	I0120 12:33:54.020397  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined IP address 192.168.61.107 and MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:33:54.020522  992109 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0120 12:33:54.020716  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHPort
	I0120 12:33:54.020885  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHKeyPath
	I0120 12:33:54.020989  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHUsername
	I0120 12:33:54.021095  992109 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/no-preload-496524/id_rsa Username:docker}
	I0120 12:33:54.021561  992109 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 12:33:54.021576  992109 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 12:33:54.021592  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHHostname
	I0120 12:33:54.024577  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHPort
	I0120 12:33:54.024641  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:33:54.024669  992109 main.go:141] libmachine: (no-preload-496524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:8f:cb", ip: ""} in network mk-no-preload-496524: {Iface:virbr3 ExpiryTime:2025-01-20 13:28:26 +0000 UTC Type:0 Mac:52:54:00:13:8f:cb Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-496524 Clientid:01:52:54:00:13:8f:cb}
	I0120 12:33:54.024695  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined IP address 192.168.61.107 and MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:33:54.024723  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHKeyPath
	I0120 12:33:54.024878  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHUsername
	I0120 12:33:54.025140  992109 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/no-preload-496524/id_rsa Username:docker}
	I0120 12:33:54.032584  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39261
	I0120 12:33:54.032936  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:54.033474  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:33:54.033497  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:54.033809  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:54.034011  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetState
	I0120 12:33:54.035349  992109 main.go:141] libmachine: (no-preload-496524) Calling .DriverName
	I0120 12:33:54.035539  992109 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 12:33:54.035557  992109 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 12:33:54.035573  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHHostname
	I0120 12:33:54.037812  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:33:54.038056  992109 main.go:141] libmachine: (no-preload-496524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:8f:cb", ip: ""} in network mk-no-preload-496524: {Iface:virbr3 ExpiryTime:2025-01-20 13:28:26 +0000 UTC Type:0 Mac:52:54:00:13:8f:cb Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-496524 Clientid:01:52:54:00:13:8f:cb}
	I0120 12:33:54.038080  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined IP address 192.168.61.107 and MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:33:54.038193  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHPort
	I0120 12:33:54.038321  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHKeyPath
	I0120 12:33:54.038429  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHUsername
	I0120 12:33:54.038547  992109 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/no-preload-496524/id_rsa Username:docker}
	I0120 12:33:54.041727  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43939
	I0120 12:33:54.042162  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:54.042671  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:33:54.042694  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:54.043048  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:54.043263  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetState
	I0120 12:33:54.044523  992109 main.go:141] libmachine: (no-preload-496524) Calling .DriverName
	I0120 12:33:54.046748  992109 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:33:51.190620  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:53.685783  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:54.048049  992109 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:33:54.048070  992109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 12:33:54.048087  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHHostname
	I0120 12:33:54.050560  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:33:54.051116  992109 main.go:141] libmachine: (no-preload-496524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:8f:cb", ip: ""} in network mk-no-preload-496524: {Iface:virbr3 ExpiryTime:2025-01-20 13:28:26 +0000 UTC Type:0 Mac:52:54:00:13:8f:cb Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-496524 Clientid:01:52:54:00:13:8f:cb}
	I0120 12:33:54.051143  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined IP address 192.168.61.107 and MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:33:54.051300  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHPort
	I0120 12:33:54.051493  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHKeyPath
	I0120 12:33:54.051649  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHUsername
	I0120 12:33:54.051769  992109 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/no-preload-496524/id_rsa Username:docker}
	I0120 12:33:54.174035  992109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:33:54.197637  992109 node_ready.go:35] waiting up to 6m0s for node "no-preload-496524" to be "Ready" ...
	I0120 12:33:54.210713  992109 node_ready.go:49] node "no-preload-496524" has status "Ready":"True"
	I0120 12:33:54.210742  992109 node_ready.go:38] duration metric: took 13.074849ms for node "no-preload-496524" to be "Ready" ...
	I0120 12:33:54.210757  992109 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:33:54.218615  992109 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-8pf2c" in "kube-system" namespace to be "Ready" ...
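After the node reports Ready, minikube waits for each system-critical pod individually. From a workstation the same checks can be made with kubectl (a sketch; it assumes the kube context matches the profile name no-preload-496524):
	kubectl --context no-preload-496524 get nodes
	kubectl --context no-preload-496524 -n kube-system wait --for=condition=Ready pod/coredns-668d6bf9bc-8pf2c --timeout=6m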
	I0120 12:33:54.300046  992109 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 12:33:54.300080  992109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0120 12:33:54.351225  992109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:33:54.353768  992109 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 12:33:54.353789  992109 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 12:33:54.368467  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0120 12:33:54.368496  992109 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0120 12:33:54.371467  992109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 12:33:54.389639  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0120 12:33:54.389660  992109 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0120 12:33:54.401448  992109 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 12:33:54.401467  992109 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 12:33:54.465233  992109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 12:33:54.465824  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0120 12:33:54.465853  992109 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0120 12:33:54.543139  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0120 12:33:54.543178  992109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0120 12:33:54.687210  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0120 12:33:54.687234  992109 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0120 12:33:54.744978  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0120 12:33:54.745012  992109 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0120 12:33:54.771298  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0120 12:33:54.771332  992109 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0120 12:33:54.852878  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0120 12:33:54.852914  992109 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0120 12:33:54.886329  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 12:33:54.886362  992109 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0120 12:33:54.964102  992109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 12:33:55.906127  992109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.534613086s)
	I0120 12:33:55.906207  992109 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:55.906212  992109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.554946671s)
	I0120 12:33:55.906270  992109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.440998293s)
	I0120 12:33:55.906220  992109 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:33:55.906307  992109 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:55.906338  992109 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:33:55.906275  992109 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:55.906404  992109 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:33:55.906812  992109 main.go:141] libmachine: (no-preload-496524) DBG | Closing plugin on server side
	I0120 12:33:55.906854  992109 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:55.906855  992109 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:55.906862  992109 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:55.906874  992109 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:55.906877  992109 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:55.906883  992109 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:55.906886  992109 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:33:55.906893  992109 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:33:55.907039  992109 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:55.907058  992109 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:55.907081  992109 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:55.907090  992109 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:33:55.907187  992109 main.go:141] libmachine: (no-preload-496524) DBG | Closing plugin on server side
	I0120 12:33:55.907189  992109 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:55.907213  992109 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:55.908759  992109 main.go:141] libmachine: (no-preload-496524) DBG | Closing plugin on server side
	I0120 12:33:55.908766  992109 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:55.908783  992109 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:55.908801  992109 addons.go:479] Verifying addon metrics-server=true in "no-preload-496524"
	I0120 12:33:55.909118  992109 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:55.909137  992109 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:55.939415  992109 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:55.939434  992109 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:33:55.939756  992109 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:55.939772  992109 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:56.225171  992109 pod_ready.go:103] pod "coredns-668d6bf9bc-8pf2c" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:56.900293  992109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.936108167s)
	I0120 12:33:56.900402  992109 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:56.900428  992109 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:33:56.900904  992109 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:56.900913  992109 main.go:141] libmachine: (no-preload-496524) DBG | Closing plugin on server side
	I0120 12:33:56.900924  992109 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:56.900945  992109 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:56.900952  992109 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:33:56.901226  992109 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:56.901246  992109 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:56.902642  992109 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-496524 addons enable metrics-server
	
	I0120 12:33:56.904289  992109 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0120 12:33:51.982905  993585 cri.go:89] found id: ""
	I0120 12:33:51.982931  993585 logs.go:282] 0 containers: []
	W0120 12:33:51.982939  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:51.982950  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:51.982998  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:52.017989  993585 cri.go:89] found id: ""
	I0120 12:33:52.018029  993585 logs.go:282] 0 containers: []
	W0120 12:33:52.018041  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:52.018049  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:52.018117  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:52.050405  993585 cri.go:89] found id: ""
	I0120 12:33:52.050432  993585 logs.go:282] 0 containers: []
	W0120 12:33:52.050442  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:52.050450  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:52.050540  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:52.080729  993585 cri.go:89] found id: ""
	I0120 12:33:52.080760  993585 logs.go:282] 0 containers: []
	W0120 12:33:52.080767  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:52.080773  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:52.080826  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:52.110809  993585 cri.go:89] found id: ""
	I0120 12:33:52.110839  993585 logs.go:282] 0 containers: []
	W0120 12:33:52.110849  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:52.110856  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:52.110915  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:52.143357  993585 cri.go:89] found id: ""
	I0120 12:33:52.143387  993585 logs.go:282] 0 containers: []
	W0120 12:33:52.143397  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:52.143405  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:52.143475  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:52.179555  993585 cri.go:89] found id: ""
	I0120 12:33:52.179584  993585 logs.go:282] 0 containers: []
	W0120 12:33:52.179594  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:52.179607  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:52.179622  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:52.268223  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:52.268257  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:52.304968  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:52.305008  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:52.354773  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:52.354811  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:52.366909  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:52.366933  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:52.434038  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:54.934844  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:54.954370  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:54.954453  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:54.987088  993585 cri.go:89] found id: ""
	I0120 12:33:54.987124  993585 logs.go:282] 0 containers: []
	W0120 12:33:54.987136  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:54.987144  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:54.987207  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:55.020248  993585 cri.go:89] found id: ""
	I0120 12:33:55.020282  993585 logs.go:282] 0 containers: []
	W0120 12:33:55.020293  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:55.020301  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:55.020374  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:55.059488  993585 cri.go:89] found id: ""
	I0120 12:33:55.059529  993585 logs.go:282] 0 containers: []
	W0120 12:33:55.059541  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:55.059550  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:55.059614  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:55.095049  993585 cri.go:89] found id: ""
	I0120 12:33:55.095088  993585 logs.go:282] 0 containers: []
	W0120 12:33:55.095102  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:55.095112  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:55.095189  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:55.131993  993585 cri.go:89] found id: ""
	I0120 12:33:55.132028  993585 logs.go:282] 0 containers: []
	W0120 12:33:55.132039  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:55.132045  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:55.132107  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:55.168716  993585 cri.go:89] found id: ""
	I0120 12:33:55.168744  993585 logs.go:282] 0 containers: []
	W0120 12:33:55.168755  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:55.168764  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:55.168828  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:55.211532  993585 cri.go:89] found id: ""
	I0120 12:33:55.211566  993585 logs.go:282] 0 containers: []
	W0120 12:33:55.211578  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:55.211591  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:55.211658  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:55.245961  993585 cri.go:89] found id: ""
	I0120 12:33:55.245993  993585 logs.go:282] 0 containers: []
	W0120 12:33:55.246004  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:55.246019  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:55.246036  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:55.297819  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:55.297865  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:55.314469  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:55.314514  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:55.386489  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:55.386544  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:55.386566  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:55.466897  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:55.466954  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:52.642467  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:55.143921  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:55.686287  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:58.185263  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:56.905477  992109 addons.go:514] duration metric: took 2.940174389s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I0120 12:33:57.224557  992109 pod_ready.go:93] pod "coredns-668d6bf9bc-8pf2c" in "kube-system" namespace has status "Ready":"True"
	I0120 12:33:57.224585  992109 pod_ready.go:82] duration metric: took 3.005934718s for pod "coredns-668d6bf9bc-8pf2c" in "kube-system" namespace to be "Ready" ...
	I0120 12:33:57.224599  992109 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-rdj6t" in "kube-system" namespace to be "Ready" ...
	I0120 12:33:57.228981  992109 pod_ready.go:93] pod "coredns-668d6bf9bc-rdj6t" in "kube-system" namespace has status "Ready":"True"
	I0120 12:33:57.228999  992109 pod_ready.go:82] duration metric: took 4.392102ms for pod "coredns-668d6bf9bc-rdj6t" in "kube-system" namespace to be "Ready" ...
	I0120 12:33:57.229007  992109 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:33:59.239998  992109 pod_ready.go:103] pod "etcd-no-preload-496524" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:58.014588  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:58.032828  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:58.032905  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:58.075631  993585 cri.go:89] found id: ""
	I0120 12:33:58.075671  993585 logs.go:282] 0 containers: []
	W0120 12:33:58.075774  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:58.075801  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:58.075887  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:58.117897  993585 cri.go:89] found id: ""
	I0120 12:33:58.117934  993585 logs.go:282] 0 containers: []
	W0120 12:33:58.117945  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:58.117953  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:58.118022  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:58.161106  993585 cri.go:89] found id: ""
	I0120 12:33:58.161138  993585 logs.go:282] 0 containers: []
	W0120 12:33:58.161149  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:58.161157  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:58.161222  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:58.203869  993585 cri.go:89] found id: ""
	I0120 12:33:58.203905  993585 logs.go:282] 0 containers: []
	W0120 12:33:58.203915  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:58.203923  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:58.203991  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:58.247905  993585 cri.go:89] found id: ""
	I0120 12:33:58.247938  993585 logs.go:282] 0 containers: []
	W0120 12:33:58.247949  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:58.247956  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:58.248016  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:58.281395  993585 cri.go:89] found id: ""
	I0120 12:33:58.281426  993585 logs.go:282] 0 containers: []
	W0120 12:33:58.281437  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:58.281445  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:58.281506  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:58.318950  993585 cri.go:89] found id: ""
	I0120 12:33:58.318982  993585 logs.go:282] 0 containers: []
	W0120 12:33:58.318991  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:58.318996  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:58.319055  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:58.351052  993585 cri.go:89] found id: ""
	I0120 12:33:58.351080  993585 logs.go:282] 0 containers: []
	W0120 12:33:58.351089  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:58.351107  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:58.351134  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:58.363459  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:58.363489  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:58.427460  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:58.427502  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:58.427520  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:58.502031  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:58.502065  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:58.539404  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:58.539434  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:01.093414  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:01.106353  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:01.106422  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:01.145552  993585 cri.go:89] found id: ""
	I0120 12:34:01.145588  993585 logs.go:282] 0 containers: []
	W0120 12:34:01.145601  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:01.145610  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:01.145678  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:01.179253  993585 cri.go:89] found id: ""
	I0120 12:34:01.179288  993585 logs.go:282] 0 containers: []
	W0120 12:34:01.179299  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:01.179307  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:01.179374  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:01.215878  993585 cri.go:89] found id: ""
	I0120 12:34:01.215916  993585 logs.go:282] 0 containers: []
	W0120 12:34:01.215928  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:01.215937  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:01.216001  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:01.260751  993585 cri.go:89] found id: ""
	I0120 12:34:01.260783  993585 logs.go:282] 0 containers: []
	W0120 12:34:01.260795  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:01.260807  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:01.260883  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:01.303022  993585 cri.go:89] found id: ""
	I0120 12:34:01.303053  993585 logs.go:282] 0 containers: []
	W0120 12:34:01.303065  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:01.303074  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:01.303145  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:01.342483  993585 cri.go:89] found id: ""
	I0120 12:34:01.342539  993585 logs.go:282] 0 containers: []
	W0120 12:34:01.342552  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:01.342562  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:01.342642  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:01.374569  993585 cri.go:89] found id: ""
	I0120 12:34:01.374608  993585 logs.go:282] 0 containers: []
	W0120 12:34:01.374618  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:01.374633  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:01.374696  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:01.406807  993585 cri.go:89] found id: ""
	I0120 12:34:01.406838  993585 logs.go:282] 0 containers: []
	W0120 12:34:01.406848  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:01.406862  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:01.406887  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:01.446081  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:01.446111  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:01.498826  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:01.498865  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:01.512333  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:01.512370  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:01.591631  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:01.591658  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:01.591676  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:57.641818  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:00.141288  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:02.142885  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:00.685449  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:02.688229  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:01.734840  992109 pod_ready.go:103] pod "etcd-no-preload-496524" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:03.790112  992109 pod_ready.go:103] pod "etcd-no-preload-496524" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:04.235638  992109 pod_ready.go:93] pod "etcd-no-preload-496524" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:04.235671  992109 pod_ready.go:82] duration metric: took 7.006654161s for pod "etcd-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:04.235686  992109 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:04.240203  992109 pod_ready.go:93] pod "kube-apiserver-no-preload-496524" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:04.240233  992109 pod_ready.go:82] duration metric: took 4.537744ms for pod "kube-apiserver-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:04.240248  992109 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:04.244405  992109 pod_ready.go:93] pod "kube-controller-manager-no-preload-496524" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:04.244431  992109 pod_ready.go:82] duration metric: took 4.172774ms for pod "kube-controller-manager-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:04.244445  992109 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dpn56" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:04.248277  992109 pod_ready.go:93] pod "kube-proxy-dpn56" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:04.248303  992109 pod_ready.go:82] duration metric: took 3.849341ms for pod "kube-proxy-dpn56" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:04.248315  992109 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:04.251995  992109 pod_ready.go:93] pod "kube-scheduler-no-preload-496524" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:04.252016  992109 pod_ready.go:82] duration metric: took 3.69304ms for pod "kube-scheduler-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:04.252025  992109 pod_ready.go:39] duration metric: took 10.041253574s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:34:04.252040  992109 api_server.go:52] waiting for apiserver process to appear ...
	I0120 12:34:04.252101  992109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:04.288797  992109 api_server.go:72] duration metric: took 10.323505838s to wait for apiserver process to appear ...
	I0120 12:34:04.288829  992109 api_server.go:88] waiting for apiserver healthz status ...
	I0120 12:34:04.288878  992109 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0120 12:34:04.297424  992109 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
	I0120 12:34:04.299152  992109 api_server.go:141] control plane version: v1.32.0
	I0120 12:34:04.299176  992109 api_server.go:131] duration metric: took 10.340981ms to wait for apiserver health ...
	I0120 12:34:04.299188  992109 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 12:34:04.437151  992109 system_pods.go:59] 9 kube-system pods found
	I0120 12:34:04.437187  992109 system_pods.go:61] "coredns-668d6bf9bc-8pf2c" [9402090c-afdc-4fd7-a673-155ca87b9afe] Running
	I0120 12:34:04.437194  992109 system_pods.go:61] "coredns-668d6bf9bc-rdj6t" [f7882da6-0b57-402a-a902-6c4e6a8c6cd1] Running
	I0120 12:34:04.437200  992109 system_pods.go:61] "etcd-no-preload-496524" [430610d7-4491-4d35-93d6-71738b1cad0f] Running
	I0120 12:34:04.437205  992109 system_pods.go:61] "kube-apiserver-no-preload-496524" [d028d3c0-5ee8-46cc-b8e5-95f7d07e43ca] Running
	I0120 12:34:04.437210  992109 system_pods.go:61] "kube-controller-manager-no-preload-496524" [b11b36da-c5a3-4fc6-8619-4f12fda64f63] Running
	I0120 12:34:04.437215  992109 system_pods.go:61] "kube-proxy-dpn56" [dbb78c21-4dfb-4a4f-9ca0-ff006da5d4b4] Running
	I0120 12:34:04.437219  992109 system_pods.go:61] "kube-scheduler-no-preload-496524" [80058f6c-526c-487f-82a5-74df5f2e0174] Running
	I0120 12:34:04.437227  992109 system_pods.go:61] "metrics-server-f79f97bbb-dbx78" [c8fb707c-75c2-42b6-802e-52a09222f9ea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 12:34:04.437234  992109 system_pods.go:61] "storage-provisioner" [14187f8e-01fd-45ac-a749-82ba272b727f] Running
	I0120 12:34:04.437246  992109 system_pods.go:74] duration metric: took 138.05086ms to wait for pod list to return data ...
	I0120 12:34:04.437257  992109 default_sa.go:34] waiting for default service account to be created ...
	I0120 12:34:04.636609  992109 default_sa.go:45] found service account: "default"
	I0120 12:34:04.636747  992109 default_sa.go:55] duration metric: took 199.476374ms for default service account to be created ...
	I0120 12:34:04.636770  992109 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 12:34:04.836002  992109 system_pods.go:87] 9 kube-system pods found
	I0120 12:34:04.171834  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:04.189904  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:04.189975  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:04.227671  993585 cri.go:89] found id: ""
	I0120 12:34:04.227705  993585 logs.go:282] 0 containers: []
	W0120 12:34:04.227717  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:04.227725  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:04.227789  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:04.266288  993585 cri.go:89] found id: ""
	I0120 12:34:04.266319  993585 logs.go:282] 0 containers: []
	W0120 12:34:04.266329  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:04.266337  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:04.266415  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:04.303909  993585 cri.go:89] found id: ""
	I0120 12:34:04.303944  993585 logs.go:282] 0 containers: []
	W0120 12:34:04.303952  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:04.303965  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:04.304029  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:04.342095  993585 cri.go:89] found id: ""
	I0120 12:34:04.342135  993585 logs.go:282] 0 containers: []
	W0120 12:34:04.342148  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:04.342156  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:04.342220  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:04.374237  993585 cri.go:89] found id: ""
	I0120 12:34:04.374268  993585 logs.go:282] 0 containers: []
	W0120 12:34:04.374290  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:04.374299  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:04.374383  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:04.407930  993585 cri.go:89] found id: ""
	I0120 12:34:04.407962  993585 logs.go:282] 0 containers: []
	W0120 12:34:04.407973  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:04.407981  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:04.408047  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:04.444108  993585 cri.go:89] found id: ""
	I0120 12:34:04.444133  993585 logs.go:282] 0 containers: []
	W0120 12:34:04.444140  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:04.444146  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:04.444208  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:04.482725  993585 cri.go:89] found id: ""
	I0120 12:34:04.482759  993585 logs.go:282] 0 containers: []
	W0120 12:34:04.482770  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:04.482783  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:04.482796  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:04.536692  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:04.536732  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:04.549928  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:04.549952  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:04.616622  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:04.616645  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:04.616661  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:04.701813  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:04.701846  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:04.642669  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:05.136388  992635 pod_ready.go:82] duration metric: took 4m0.000888072s for pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace to be "Ready" ...
	E0120 12:34:05.136424  992635 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace to be "Ready" (will not retry!)
	I0120 12:34:05.136487  992635 pod_ready.go:39] duration metric: took 4m15.539523942s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:34:05.136548  992635 kubeadm.go:597] duration metric: took 4m23.239372129s to restartPrimaryControlPlane
	W0120 12:34:05.136646  992635 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0120 12:34:05.136701  992635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 12:34:05.185480  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:07.185630  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:09.185867  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:07.245120  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:07.257846  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:07.257917  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:07.293851  993585 cri.go:89] found id: ""
	I0120 12:34:07.293885  993585 logs.go:282] 0 containers: []
	W0120 12:34:07.293898  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:07.293906  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:07.293970  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:07.328532  993585 cri.go:89] found id: ""
	I0120 12:34:07.328568  993585 logs.go:282] 0 containers: []
	W0120 12:34:07.328579  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:07.328588  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:07.328652  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:07.362019  993585 cri.go:89] found id: ""
	I0120 12:34:07.362053  993585 logs.go:282] 0 containers: []
	W0120 12:34:07.362065  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:07.362073  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:07.362136  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:07.394170  993585 cri.go:89] found id: ""
	I0120 12:34:07.394211  993585 logs.go:282] 0 containers: []
	W0120 12:34:07.394223  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:07.394231  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:07.394303  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:07.426650  993585 cri.go:89] found id: ""
	I0120 12:34:07.426694  993585 logs.go:282] 0 containers: []
	W0120 12:34:07.426711  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:07.426719  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:07.426786  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:07.472659  993585 cri.go:89] found id: ""
	I0120 12:34:07.472695  993585 logs.go:282] 0 containers: []
	W0120 12:34:07.472706  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:07.472715  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:07.472788  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:07.506741  993585 cri.go:89] found id: ""
	I0120 12:34:07.506768  993585 logs.go:282] 0 containers: []
	W0120 12:34:07.506777  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:07.506782  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:07.506845  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:07.543976  993585 cri.go:89] found id: ""
	I0120 12:34:07.544007  993585 logs.go:282] 0 containers: []
	W0120 12:34:07.544017  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:07.544028  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:07.544039  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:07.618073  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:07.618109  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:07.633284  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:07.633332  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:07.703104  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:07.703134  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:07.703151  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:07.786367  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:07.786404  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:10.324611  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:10.337443  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:10.337513  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:10.371387  993585 cri.go:89] found id: ""
	I0120 12:34:10.371421  993585 logs.go:282] 0 containers: []
	W0120 12:34:10.371432  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:10.371489  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:10.371545  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:10.403803  993585 cri.go:89] found id: ""
	I0120 12:34:10.403829  993585 logs.go:282] 0 containers: []
	W0120 12:34:10.403837  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:10.403843  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:10.403891  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:10.434806  993585 cri.go:89] found id: ""
	I0120 12:34:10.434829  993585 logs.go:282] 0 containers: []
	W0120 12:34:10.434836  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:10.434841  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:10.434897  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:10.465821  993585 cri.go:89] found id: ""
	I0120 12:34:10.465849  993585 logs.go:282] 0 containers: []
	W0120 12:34:10.465856  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:10.465861  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:10.465905  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:10.497007  993585 cri.go:89] found id: ""
	I0120 12:34:10.497029  993585 logs.go:282] 0 containers: []
	W0120 12:34:10.497037  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:10.497043  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:10.497086  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:10.527026  993585 cri.go:89] found id: ""
	I0120 12:34:10.527050  993585 logs.go:282] 0 containers: []
	W0120 12:34:10.527060  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:10.527069  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:10.527134  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:10.557590  993585 cri.go:89] found id: ""
	I0120 12:34:10.557621  993585 logs.go:282] 0 containers: []
	W0120 12:34:10.557631  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:10.557638  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:10.557694  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:10.587747  993585 cri.go:89] found id: ""
	I0120 12:34:10.587777  993585 logs.go:282] 0 containers: []
	W0120 12:34:10.587787  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:10.587799  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:10.587813  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:10.635855  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:10.635886  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:10.649110  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:10.649147  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:10.719339  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:10.719382  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:10.719399  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:10.791808  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:10.791839  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:11.684329  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:13.686198  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:13.343317  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:13.356667  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:13.356731  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:13.388894  993585 cri.go:89] found id: ""
	I0120 12:34:13.388926  993585 logs.go:282] 0 containers: []
	W0120 12:34:13.388937  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:13.388944  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:13.389013  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:13.419319  993585 cri.go:89] found id: ""
	I0120 12:34:13.419350  993585 logs.go:282] 0 containers: []
	W0120 12:34:13.419360  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:13.419374  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:13.419440  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:13.451302  993585 cri.go:89] found id: ""
	I0120 12:34:13.451328  993585 logs.go:282] 0 containers: []
	W0120 12:34:13.451335  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:13.451345  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:13.451398  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:13.485033  993585 cri.go:89] found id: ""
	I0120 12:34:13.485062  993585 logs.go:282] 0 containers: []
	W0120 12:34:13.485073  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:13.485079  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:13.485126  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:13.515362  993585 cri.go:89] found id: ""
	I0120 12:34:13.515392  993585 logs.go:282] 0 containers: []
	W0120 12:34:13.515401  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:13.515410  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:13.515481  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:13.545307  993585 cri.go:89] found id: ""
	I0120 12:34:13.545356  993585 logs.go:282] 0 containers: []
	W0120 12:34:13.545366  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:13.545374  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:13.545436  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:13.575714  993585 cri.go:89] found id: ""
	I0120 12:34:13.575738  993585 logs.go:282] 0 containers: []
	W0120 12:34:13.575745  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:13.575751  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:13.575805  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:13.606046  993585 cri.go:89] found id: ""
	I0120 12:34:13.606099  993585 logs.go:282] 0 containers: []
	W0120 12:34:13.606112  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:13.606127  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:13.606145  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:13.667543  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:13.667567  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:13.667584  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:13.741766  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:13.741795  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:13.778095  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:13.778131  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:13.830514  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:13.830554  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:16.343728  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:16.356586  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:16.356665  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:16.390098  993585 cri.go:89] found id: ""
	I0120 12:34:16.390132  993585 logs.go:282] 0 containers: []
	W0120 12:34:16.390146  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:16.390155  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:16.390228  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:16.422651  993585 cri.go:89] found id: ""
	I0120 12:34:16.422682  993585 logs.go:282] 0 containers: []
	W0120 12:34:16.422690  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:16.422699  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:16.422755  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:16.455349  993585 cri.go:89] found id: ""
	I0120 12:34:16.455378  993585 logs.go:282] 0 containers: []
	W0120 12:34:16.455390  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:16.455398  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:16.455467  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:16.494862  993585 cri.go:89] found id: ""
	I0120 12:34:16.494893  993585 logs.go:282] 0 containers: []
	W0120 12:34:16.494904  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:16.494911  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:16.494975  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:16.526039  993585 cri.go:89] found id: ""
	I0120 12:34:16.526070  993585 logs.go:282] 0 containers: []
	W0120 12:34:16.526079  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:16.526087  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:16.526159  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:16.557323  993585 cri.go:89] found id: ""
	I0120 12:34:16.557360  993585 logs.go:282] 0 containers: []
	W0120 12:34:16.557376  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:16.557382  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:16.557444  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:16.607483  993585 cri.go:89] found id: ""
	I0120 12:34:16.607516  993585 logs.go:282] 0 containers: []
	W0120 12:34:16.607527  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:16.607535  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:16.607600  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:16.639620  993585 cri.go:89] found id: ""
	I0120 12:34:16.639644  993585 logs.go:282] 0 containers: []
	W0120 12:34:16.639654  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:16.639665  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:16.639681  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:16.675471  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:16.675500  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:16.726780  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:16.726814  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:16.739029  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:16.739060  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:16.802705  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:16.802738  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:16.802752  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:16.185205  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:18.685055  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:19.379610  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:19.392739  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:19.392813  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:19.423927  993585 cri.go:89] found id: ""
	I0120 12:34:19.423959  993585 logs.go:282] 0 containers: []
	W0120 12:34:19.423971  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:19.423979  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:19.424049  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:19.455104  993585 cri.go:89] found id: ""
	I0120 12:34:19.455131  993585 logs.go:282] 0 containers: []
	W0120 12:34:19.455140  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:19.455145  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:19.455192  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:19.487611  993585 cri.go:89] found id: ""
	I0120 12:34:19.487642  993585 logs.go:282] 0 containers: []
	W0120 12:34:19.487652  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:19.487664  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:19.487728  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:19.517582  993585 cri.go:89] found id: ""
	I0120 12:34:19.517613  993585 logs.go:282] 0 containers: []
	W0120 12:34:19.517638  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:19.517665  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:19.517734  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:19.549138  993585 cri.go:89] found id: ""
	I0120 12:34:19.549177  993585 logs.go:282] 0 containers: []
	W0120 12:34:19.549190  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:19.549199  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:19.549263  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:19.584290  993585 cri.go:89] found id: ""
	I0120 12:34:19.584317  993585 logs.go:282] 0 containers: []
	W0120 12:34:19.584328  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:19.584334  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:19.584384  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:19.618867  993585 cri.go:89] found id: ""
	I0120 12:34:19.618900  993585 logs.go:282] 0 containers: []
	W0120 12:34:19.618909  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:19.618915  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:19.618967  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:19.651916  993585 cri.go:89] found id: ""
	I0120 12:34:19.651956  993585 logs.go:282] 0 containers: []
	W0120 12:34:19.651968  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:19.651981  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:19.651997  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:19.691207  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:19.691239  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:19.742403  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:19.742436  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:19.755212  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:19.755245  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:19.818642  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:19.818671  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:19.818686  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:21.184740  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:23.685218  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:22.398142  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:22.415423  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:22.415497  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:22.450558  993585 cri.go:89] found id: ""
	I0120 12:34:22.450595  993585 logs.go:282] 0 containers: []
	W0120 12:34:22.450606  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:22.450613  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:22.450672  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:22.481655  993585 cri.go:89] found id: ""
	I0120 12:34:22.481686  993585 logs.go:282] 0 containers: []
	W0120 12:34:22.481697  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:22.481706  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:22.481773  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:22.515465  993585 cri.go:89] found id: ""
	I0120 12:34:22.515498  993585 logs.go:282] 0 containers: []
	W0120 12:34:22.515509  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:22.515516  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:22.515575  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:22.546538  993585 cri.go:89] found id: ""
	I0120 12:34:22.546566  993585 logs.go:282] 0 containers: []
	W0120 12:34:22.546575  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:22.546583  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:22.546640  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:22.577112  993585 cri.go:89] found id: ""
	I0120 12:34:22.577140  993585 logs.go:282] 0 containers: []
	W0120 12:34:22.577151  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:22.577158  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:22.577216  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:22.610604  993585 cri.go:89] found id: ""
	I0120 12:34:22.610640  993585 logs.go:282] 0 containers: []
	W0120 12:34:22.610650  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:22.610657  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:22.610718  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:22.641708  993585 cri.go:89] found id: ""
	I0120 12:34:22.641737  993585 logs.go:282] 0 containers: []
	W0120 12:34:22.641745  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:22.641752  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:22.641818  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:22.671952  993585 cri.go:89] found id: ""
	I0120 12:34:22.671977  993585 logs.go:282] 0 containers: []
	W0120 12:34:22.671984  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:22.671994  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:22.672004  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:22.722515  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:22.722552  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:22.734806  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:22.734827  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:22.797517  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:22.797554  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:22.797573  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:22.872821  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:22.872851  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:25.413129  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:25.425926  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:25.426021  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:25.462540  993585 cri.go:89] found id: ""
	I0120 12:34:25.462574  993585 logs.go:282] 0 containers: []
	W0120 12:34:25.462584  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:25.462595  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:25.462650  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:25.493646  993585 cri.go:89] found id: ""
	I0120 12:34:25.493672  993585 logs.go:282] 0 containers: []
	W0120 12:34:25.493679  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:25.493688  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:25.493732  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:25.529070  993585 cri.go:89] found id: ""
	I0120 12:34:25.529103  993585 logs.go:282] 0 containers: []
	W0120 12:34:25.529126  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:25.529135  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:25.529199  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:25.562199  993585 cri.go:89] found id: ""
	I0120 12:34:25.562225  993585 logs.go:282] 0 containers: []
	W0120 12:34:25.562258  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:25.562265  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:25.562329  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:25.597698  993585 cri.go:89] found id: ""
	I0120 12:34:25.597728  993585 logs.go:282] 0 containers: []
	W0120 12:34:25.597739  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:25.597745  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:25.597794  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:25.632923  993585 cri.go:89] found id: ""
	I0120 12:34:25.632950  993585 logs.go:282] 0 containers: []
	W0120 12:34:25.632961  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:25.632968  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:25.633031  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:25.664379  993585 cri.go:89] found id: ""
	I0120 12:34:25.664409  993585 logs.go:282] 0 containers: []
	W0120 12:34:25.664419  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:25.664434  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:25.664490  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:25.694965  993585 cri.go:89] found id: ""
	I0120 12:34:25.694992  993585 logs.go:282] 0 containers: []
	W0120 12:34:25.695002  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:25.695014  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:25.695027  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:25.742956  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:25.742987  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:25.755095  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:25.755122  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:25.822777  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:25.822807  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:25.822824  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:25.895354  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:25.895389  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:25.685681  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:28.183977  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:28.433411  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:28.445691  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:28.445750  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:28.475915  993585 cri.go:89] found id: ""
	I0120 12:34:28.475949  993585 logs.go:282] 0 containers: []
	W0120 12:34:28.475961  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:28.475969  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:28.476029  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:28.506219  993585 cri.go:89] found id: ""
	I0120 12:34:28.506253  993585 logs.go:282] 0 containers: []
	W0120 12:34:28.506264  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:28.506272  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:28.506332  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:28.539662  993585 cri.go:89] found id: ""
	I0120 12:34:28.539693  993585 logs.go:282] 0 containers: []
	W0120 12:34:28.539704  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:28.539712  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:28.539775  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:28.570360  993585 cri.go:89] found id: ""
	I0120 12:34:28.570390  993585 logs.go:282] 0 containers: []
	W0120 12:34:28.570398  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:28.570404  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:28.570466  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:28.599217  993585 cri.go:89] found id: ""
	I0120 12:34:28.599242  993585 logs.go:282] 0 containers: []
	W0120 12:34:28.599249  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:28.599255  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:28.599310  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:28.629325  993585 cri.go:89] found id: ""
	I0120 12:34:28.629366  993585 logs.go:282] 0 containers: []
	W0120 12:34:28.629378  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:28.629386  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:28.629453  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:28.659625  993585 cri.go:89] found id: ""
	I0120 12:34:28.659657  993585 logs.go:282] 0 containers: []
	W0120 12:34:28.659668  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:28.659675  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:28.659734  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:28.695195  993585 cri.go:89] found id: ""
	I0120 12:34:28.695222  993585 logs.go:282] 0 containers: []
	W0120 12:34:28.695232  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:28.695242  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:28.695255  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:28.756910  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:28.756942  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:28.771902  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:28.771932  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:28.859464  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:28.859491  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:28.859510  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:28.931739  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:28.931769  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:31.472251  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:31.484961  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:31.485019  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:31.518142  993585 cri.go:89] found id: ""
	I0120 12:34:31.518175  993585 logs.go:282] 0 containers: []
	W0120 12:34:31.518187  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:31.518194  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:31.518241  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:31.550125  993585 cri.go:89] found id: ""
	I0120 12:34:31.550187  993585 logs.go:282] 0 containers: []
	W0120 12:34:31.550201  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:31.550210  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:31.550274  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:31.583805  993585 cri.go:89] found id: ""
	I0120 12:34:31.583834  993585 logs.go:282] 0 containers: []
	W0120 12:34:31.583846  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:31.583854  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:31.583908  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:31.626186  993585 cri.go:89] found id: ""
	I0120 12:34:31.626209  993585 logs.go:282] 0 containers: []
	W0120 12:34:31.626217  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:31.626223  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:31.626271  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:31.657467  993585 cri.go:89] found id: ""
	I0120 12:34:31.657507  993585 logs.go:282] 0 containers: []
	W0120 12:34:31.657519  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:31.657527  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:31.657594  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:31.686983  993585 cri.go:89] found id: ""
	I0120 12:34:31.687008  993585 logs.go:282] 0 containers: []
	W0120 12:34:31.687015  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:31.687021  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:31.687075  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:31.721602  993585 cri.go:89] found id: ""
	I0120 12:34:31.721632  993585 logs.go:282] 0 containers: []
	W0120 12:34:31.721645  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:31.721651  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:31.721701  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:31.751369  993585 cri.go:89] found id: ""
	I0120 12:34:31.751394  993585 logs.go:282] 0 containers: []
	W0120 12:34:31.751401  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:31.751412  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:31.751435  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:31.816285  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:31.816327  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:31.816344  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:31.891930  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:31.891969  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:31.927472  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:31.927503  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:32.776819  992635 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.640090134s)
	I0120 12:34:32.776911  992635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 12:34:32.792110  992635 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 12:34:32.801453  992635 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:34:32.809836  992635 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:34:32.809855  992635 kubeadm.go:157] found existing configuration files:
	
	I0120 12:34:32.809892  992635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 12:34:32.817968  992635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:34:32.818014  992635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:34:32.826142  992635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 12:34:32.834058  992635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:34:32.834109  992635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:34:32.842776  992635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 12:34:32.850601  992635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:34:32.850645  992635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:34:32.858854  992635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 12:34:32.866819  992635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:34:32.866860  992635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 12:34:32.875193  992635 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 12:34:32.920522  992635 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 12:34:32.920570  992635 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 12:34:33.023871  992635 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 12:34:33.024001  992635 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 12:34:33.024134  992635 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 12:34:33.032806  992635 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 12:34:33.035443  992635 out.go:235]   - Generating certificates and keys ...
	I0120 12:34:33.035549  992635 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 12:34:33.035644  992635 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 12:34:33.035776  992635 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 12:34:33.035886  992635 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 12:34:33.035993  992635 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 12:34:33.036086  992635 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 12:34:33.037424  992635 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 12:34:33.037490  992635 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 12:34:33.037563  992635 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 12:34:33.037649  992635 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 12:34:33.037695  992635 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 12:34:33.037750  992635 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 12:34:33.105282  992635 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 12:34:33.414668  992635 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 12:34:33.727680  992635 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 12:34:33.812741  992635 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 12:34:33.984459  992635 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 12:34:33.985140  992635 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 12:34:33.988084  992635 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 12:34:30.184978  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:32.185137  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:31.974997  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:31.975024  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:34.488614  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:34.506548  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:34.506624  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:34.563005  993585 cri.go:89] found id: ""
	I0120 12:34:34.563039  993585 logs.go:282] 0 containers: []
	W0120 12:34:34.563052  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:34.563060  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:34.563124  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:34.594244  993585 cri.go:89] found id: ""
	I0120 12:34:34.594284  993585 logs.go:282] 0 containers: []
	W0120 12:34:34.594296  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:34.594304  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:34.594373  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:34.625619  993585 cri.go:89] found id: ""
	I0120 12:34:34.625654  993585 logs.go:282] 0 containers: []
	W0120 12:34:34.625665  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:34.625673  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:34.625738  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:34.658589  993585 cri.go:89] found id: ""
	I0120 12:34:34.658619  993585 logs.go:282] 0 containers: []
	W0120 12:34:34.658627  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:34.658635  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:34.658703  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:34.689254  993585 cri.go:89] found id: ""
	I0120 12:34:34.689283  993585 logs.go:282] 0 containers: []
	W0120 12:34:34.689294  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:34.689301  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:34.689361  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:34.718991  993585 cri.go:89] found id: ""
	I0120 12:34:34.719017  993585 logs.go:282] 0 containers: []
	W0120 12:34:34.719025  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:34.719032  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:34.719087  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:34.755470  993585 cri.go:89] found id: ""
	I0120 12:34:34.755506  993585 logs.go:282] 0 containers: []
	W0120 12:34:34.755517  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:34.755525  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:34.755591  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:34.794468  993585 cri.go:89] found id: ""
	I0120 12:34:34.794511  993585 logs.go:282] 0 containers: []
	W0120 12:34:34.794536  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:34.794550  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:34.794567  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:34.872224  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:34.872255  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:34.906752  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:34.906782  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:34.958387  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:34.958418  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:34.970224  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:34.970247  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:35.042447  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:33.990145  992635 out.go:235]   - Booting up control plane ...
	I0120 12:34:33.990278  992635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 12:34:33.990399  992635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 12:34:33.990496  992635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 12:34:34.010394  992635 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 12:34:34.017815  992635 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 12:34:34.017877  992635 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 12:34:34.137419  992635 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 12:34:34.137546  992635 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 12:34:35.139769  992635 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002196985s
	I0120 12:34:35.139867  992635 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 12:34:34.685113  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:36.685852  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:39.185481  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:39.641165  992635 kubeadm.go:310] [api-check] The API server is healthy after 4.501397328s
	I0120 12:34:39.658614  992635 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 12:34:40.171926  992635 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 12:34:40.198719  992635 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 12:34:40.198914  992635 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-987349 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 12:34:40.207929  992635 kubeadm.go:310] [bootstrap-token] Using token: n4uhes.3ig136bhcqw1unce
	I0120 12:34:40.209373  992635 out.go:235]   - Configuring RBAC rules ...
	I0120 12:34:40.209504  992635 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 12:34:40.213198  992635 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 12:34:40.219884  992635 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 12:34:40.223154  992635 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 12:34:40.228539  992635 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 12:34:40.232011  992635 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 12:34:40.369420  992635 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 12:34:40.817626  992635 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 12:34:41.370167  992635 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 12:34:41.371275  992635 kubeadm.go:310] 
	I0120 12:34:41.371411  992635 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 12:34:41.371436  992635 kubeadm.go:310] 
	I0120 12:34:41.371547  992635 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 12:34:41.371567  992635 kubeadm.go:310] 
	I0120 12:34:41.371607  992635 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 12:34:41.371696  992635 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 12:34:41.371776  992635 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 12:34:41.371785  992635 kubeadm.go:310] 
	I0120 12:34:41.371870  992635 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 12:34:41.371879  992635 kubeadm.go:310] 
	I0120 12:34:41.371946  992635 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 12:34:41.371956  992635 kubeadm.go:310] 
	I0120 12:34:41.372030  992635 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 12:34:41.372156  992635 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 12:34:41.372262  992635 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 12:34:41.372278  992635 kubeadm.go:310] 
	I0120 12:34:41.372392  992635 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 12:34:41.372498  992635 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 12:34:41.372507  992635 kubeadm.go:310] 
	I0120 12:34:41.372606  992635 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token n4uhes.3ig136bhcqw1unce \
	I0120 12:34:41.372783  992635 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9daca4b474befed26d429fab1ed19f40e58ea9925316ea59a2e801171f5c9665 \
	I0120 12:34:41.372829  992635 kubeadm.go:310] 	--control-plane 
	I0120 12:34:41.372852  992635 kubeadm.go:310] 
	I0120 12:34:41.372972  992635 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 12:34:41.372985  992635 kubeadm.go:310] 
	I0120 12:34:41.373076  992635 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token n4uhes.3ig136bhcqw1unce \
	I0120 12:34:41.373204  992635 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9daca4b474befed26d429fab1ed19f40e58ea9925316ea59a2e801171f5c9665 
	I0120 12:34:41.373662  992635 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 12:34:41.373689  992635 cni.go:84] Creating CNI manager for ""
	I0120 12:34:41.373703  992635 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:34:41.375374  992635 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 12:34:37.542589  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:37.559095  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:37.559165  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:37.598316  993585 cri.go:89] found id: ""
	I0120 12:34:37.598348  993585 logs.go:282] 0 containers: []
	W0120 12:34:37.598360  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:37.598369  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:37.598438  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:37.628599  993585 cri.go:89] found id: ""
	I0120 12:34:37.628633  993585 logs.go:282] 0 containers: []
	W0120 12:34:37.628645  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:37.628652  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:37.628727  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:37.668373  993585 cri.go:89] found id: ""
	I0120 12:34:37.668415  993585 logs.go:282] 0 containers: []
	W0120 12:34:37.668428  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:37.668436  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:37.668505  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:37.708471  993585 cri.go:89] found id: ""
	I0120 12:34:37.708506  993585 logs.go:282] 0 containers: []
	W0120 12:34:37.708517  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:37.708525  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:37.708586  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:37.741568  993585 cri.go:89] found id: ""
	I0120 12:34:37.741620  993585 logs.go:282] 0 containers: []
	W0120 12:34:37.741639  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:37.741647  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:37.741722  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:37.774368  993585 cri.go:89] found id: ""
	I0120 12:34:37.774396  993585 logs.go:282] 0 containers: []
	W0120 12:34:37.774406  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:37.774414  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:37.774482  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:37.806996  993585 cri.go:89] found id: ""
	I0120 12:34:37.807031  993585 logs.go:282] 0 containers: []
	W0120 12:34:37.807042  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:37.807050  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:37.807111  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:37.843251  993585 cri.go:89] found id: ""
	I0120 12:34:37.843285  993585 logs.go:282] 0 containers: []
	W0120 12:34:37.843296  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:37.843317  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:37.843336  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:37.918915  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:37.918937  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:37.918949  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:38.003693  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:38.003735  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:38.044200  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:38.044228  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:38.098358  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:38.098396  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:40.611766  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:40.625430  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:40.625513  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:40.662291  993585 cri.go:89] found id: ""
	I0120 12:34:40.662328  993585 logs.go:282] 0 containers: []
	W0120 12:34:40.662340  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:40.662348  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:40.662416  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:40.700505  993585 cri.go:89] found id: ""
	I0120 12:34:40.700535  993585 logs.go:282] 0 containers: []
	W0120 12:34:40.700543  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:40.700549  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:40.700621  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:40.740098  993585 cri.go:89] found id: ""
	I0120 12:34:40.740156  993585 logs.go:282] 0 containers: []
	W0120 12:34:40.740168  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:40.740177  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:40.740246  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:40.779511  993585 cri.go:89] found id: ""
	I0120 12:34:40.779538  993585 logs.go:282] 0 containers: []
	W0120 12:34:40.779547  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:40.779552  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:40.779602  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:40.814466  993585 cri.go:89] found id: ""
	I0120 12:34:40.814508  993585 logs.go:282] 0 containers: []
	W0120 12:34:40.814539  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:40.814549  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:40.814624  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:40.848198  993585 cri.go:89] found id: ""
	I0120 12:34:40.848224  993585 logs.go:282] 0 containers: []
	W0120 12:34:40.848233  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:40.848239  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:40.848295  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:40.881226  993585 cri.go:89] found id: ""
	I0120 12:34:40.881260  993585 logs.go:282] 0 containers: []
	W0120 12:34:40.881273  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:40.881281  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:40.881345  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:40.914605  993585 cri.go:89] found id: ""
	I0120 12:34:40.914639  993585 logs.go:282] 0 containers: []
	W0120 12:34:40.914649  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:40.914659  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:40.914671  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:40.967363  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:40.967401  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:40.981622  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:40.981655  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:41.052041  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:41.052074  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:41.052089  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:41.136661  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:41.136699  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:41.376667  992635 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 12:34:41.387591  992635 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 12:34:41.405656  992635 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 12:34:41.405748  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:34:41.405779  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-987349 minikube.k8s.io/updated_at=2025_01_20T12_34_41_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=77d80cf1517f5f1439721b28711982314b21bec9 minikube.k8s.io/name=embed-certs-987349 minikube.k8s.io/primary=true
	I0120 12:34:41.445579  992635 ops.go:34] apiserver oom_adj: -16
	I0120 12:34:41.593723  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:34:42.093899  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:34:41.685860  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:43.685895  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:42.593991  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:34:43.093847  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:34:43.594692  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:34:44.094458  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:34:44.594425  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:34:45.094074  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:34:45.201304  992635 kubeadm.go:1113] duration metric: took 3.795623962s to wait for elevateKubeSystemPrivileges
	I0120 12:34:45.201350  992635 kubeadm.go:394] duration metric: took 5m3.346037476s to StartCluster
	I0120 12:34:45.201376  992635 settings.go:142] acquiring lock: {Name:mk1751cb86b61fcfcb0ff093c93fb653ca2002ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:34:45.201474  992635 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:34:45.204831  992635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/kubeconfig: {Name:mk7b2d17f701fc845d991764ab9cc32b8b0646e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:34:45.205103  992635 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.170 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 12:34:45.205287  992635 config.go:182] Loaded profile config "embed-certs-987349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:34:45.205236  992635 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 12:34:45.205342  992635 addons.go:69] Setting dashboard=true in profile "embed-certs-987349"
	I0120 12:34:45.205370  992635 addons.go:238] Setting addon dashboard=true in "embed-certs-987349"
	I0120 12:34:45.205355  992635 addons.go:69] Setting default-storageclass=true in profile "embed-certs-987349"
	I0120 12:34:45.205338  992635 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-987349"
	I0120 12:34:45.205375  992635 addons.go:69] Setting metrics-server=true in profile "embed-certs-987349"
	I0120 12:34:45.205395  992635 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-987349"
	W0120 12:34:45.205403  992635 addons.go:247] addon storage-provisioner should already be in state true
	I0120 12:34:45.205413  992635 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-987349"
	I0120 12:34:45.205443  992635 host.go:66] Checking if "embed-certs-987349" exists ...
	W0120 12:34:45.205383  992635 addons.go:247] addon dashboard should already be in state true
	I0120 12:34:45.205402  992635 addons.go:238] Setting addon metrics-server=true in "embed-certs-987349"
	I0120 12:34:45.205522  992635 host.go:66] Checking if "embed-certs-987349" exists ...
	W0120 12:34:45.205537  992635 addons.go:247] addon metrics-server should already be in state true
	I0120 12:34:45.205585  992635 host.go:66] Checking if "embed-certs-987349" exists ...
	I0120 12:34:45.205843  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:34:45.205869  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:34:45.205889  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:45.205900  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:45.205939  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:34:45.205984  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:45.205987  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:34:45.206010  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:45.206677  992635 out.go:177] * Verifying Kubernetes components...
	I0120 12:34:45.208137  992635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:34:45.222507  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40047
	I0120 12:34:45.222862  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42645
	I0120 12:34:45.223151  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:45.223444  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44495
	I0120 12:34:45.223795  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:34:45.223818  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:45.223841  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:45.224249  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:45.224372  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:34:45.224394  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:45.224716  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38727
	I0120 12:34:45.224739  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:45.224840  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:34:45.224881  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:45.225063  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:45.225306  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:34:45.225342  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:45.225362  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:45.225827  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:34:45.225827  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:34:45.225864  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:45.225848  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:45.226299  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:45.226361  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:45.226579  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetState
	I0120 12:34:45.226996  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:34:45.227044  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:45.230457  992635 addons.go:238] Setting addon default-storageclass=true in "embed-certs-987349"
	W0120 12:34:45.230485  992635 addons.go:247] addon default-storageclass should already be in state true
	I0120 12:34:45.230516  992635 host.go:66] Checking if "embed-certs-987349" exists ...
	I0120 12:34:45.230928  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:34:45.230994  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:45.245536  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39527
	I0120 12:34:45.246137  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:45.246774  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:34:45.246800  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:45.246874  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46007
	I0120 12:34:45.247488  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:45.247514  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:45.247491  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34245
	I0120 12:34:45.247884  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:45.247991  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetState
	I0120 12:34:45.248377  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:34:45.248398  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:45.248650  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:34:45.248676  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:45.249046  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:45.249050  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:45.249260  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetState
	I0120 12:34:45.249453  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetState
	I0120 12:34:45.250058  992635 main.go:141] libmachine: (embed-certs-987349) Calling .DriverName
	I0120 12:34:45.250219  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45249
	I0120 12:34:45.250876  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:45.251417  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:34:45.251442  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:45.251975  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:45.252485  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:34:45.252527  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:45.252582  992635 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0120 12:34:45.252806  992635 main.go:141] libmachine: (embed-certs-987349) Calling .DriverName
	I0120 12:34:45.253386  992635 main.go:141] libmachine: (embed-certs-987349) Calling .DriverName
	I0120 12:34:45.253969  992635 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 12:34:45.253998  992635 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 12:34:45.254019  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHHostname
	I0120 12:34:45.254034  992635 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:34:45.254933  992635 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0120 12:34:45.255880  992635 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:34:45.255900  992635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 12:34:45.255918  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHHostname
	I0120 12:34:45.258271  992635 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0120 12:34:43.674682  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:43.690652  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:43.690723  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:43.721291  993585 cri.go:89] found id: ""
	I0120 12:34:43.721323  993585 logs.go:282] 0 containers: []
	W0120 12:34:43.721334  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:43.721342  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:43.721410  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:43.752041  993585 cri.go:89] found id: ""
	I0120 12:34:43.752065  993585 logs.go:282] 0 containers: []
	W0120 12:34:43.752072  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:43.752078  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:43.752138  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:43.785868  993585 cri.go:89] found id: ""
	I0120 12:34:43.785901  993585 logs.go:282] 0 containers: []
	W0120 12:34:43.785913  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:43.785920  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:43.785989  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:43.815950  993585 cri.go:89] found id: ""
	I0120 12:34:43.815981  993585 logs.go:282] 0 containers: []
	W0120 12:34:43.815991  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:43.815998  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:43.816051  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:43.846957  993585 cri.go:89] found id: ""
	I0120 12:34:43.846989  993585 logs.go:282] 0 containers: []
	W0120 12:34:43.846998  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:43.847006  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:43.847063  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:43.879933  993585 cri.go:89] found id: ""
	I0120 12:34:43.879961  993585 logs.go:282] 0 containers: []
	W0120 12:34:43.879971  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:43.879979  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:43.880037  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:43.910895  993585 cri.go:89] found id: ""
	I0120 12:34:43.910922  993585 logs.go:282] 0 containers: []
	W0120 12:34:43.910932  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:43.910940  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:43.911004  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:43.940052  993585 cri.go:89] found id: ""
	I0120 12:34:43.940083  993585 logs.go:282] 0 containers: []
	W0120 12:34:43.940092  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:43.940103  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:43.940119  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:43.992764  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:43.992797  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:44.004467  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:44.004489  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:44.076395  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:44.076424  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:44.076440  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:44.155006  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:44.155051  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:46.706685  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:46.720910  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:46.720986  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:46.769398  993585 cri.go:89] found id: ""
	I0120 12:34:46.769438  993585 logs.go:282] 0 containers: []
	W0120 12:34:46.769452  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:46.769461  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:46.769532  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:46.812658  993585 cri.go:89] found id: ""
	I0120 12:34:46.812692  993585 logs.go:282] 0 containers: []
	W0120 12:34:46.812704  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:46.812712  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:46.812780  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:46.849224  993585 cri.go:89] found id: ""
	I0120 12:34:46.849260  993585 logs.go:282] 0 containers: []
	W0120 12:34:46.849271  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:46.849278  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:46.849340  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:46.880621  993585 cri.go:89] found id: ""
	I0120 12:34:46.880660  993585 logs.go:282] 0 containers: []
	W0120 12:34:46.880672  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:46.880680  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:46.880754  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:46.917825  993585 cri.go:89] found id: ""
	I0120 12:34:46.917860  993585 logs.go:282] 0 containers: []
	W0120 12:34:46.917872  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:46.917880  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:46.917948  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:46.953069  993585 cri.go:89] found id: ""
	I0120 12:34:46.953102  993585 logs.go:282] 0 containers: []
	W0120 12:34:46.953114  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:46.953122  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:46.953210  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:45.258378  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:34:45.258973  992635 main.go:141] libmachine: (embed-certs-987349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:72:25", ip: ""} in network mk-embed-certs-987349: {Iface:virbr4 ExpiryTime:2025-01-20 13:29:28 +0000 UTC Type:0 Mac:52:54:00:17:72:25 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:embed-certs-987349 Clientid:01:52:54:00:17:72:25}
	I0120 12:34:45.259074  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined IP address 192.168.72.170 and MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:34:45.259447  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHPort
	I0120 12:34:45.259546  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0120 12:34:45.259555  992635 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0120 12:34:45.259566  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHHostname
	I0120 12:34:45.259650  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHKeyPath
	I0120 12:34:45.260023  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHUsername
	I0120 12:34:45.260165  992635 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/embed-certs-987349/id_rsa Username:docker}
	I0120 12:34:45.260401  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:34:45.260819  992635 main.go:141] libmachine: (embed-certs-987349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:72:25", ip: ""} in network mk-embed-certs-987349: {Iface:virbr4 ExpiryTime:2025-01-20 13:29:28 +0000 UTC Type:0 Mac:52:54:00:17:72:25 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:embed-certs-987349 Clientid:01:52:54:00:17:72:25}
	I0120 12:34:45.260837  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined IP address 192.168.72.170 and MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:34:45.261018  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHPort
	I0120 12:34:45.261123  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHKeyPath
	I0120 12:34:45.261371  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHUsername
	I0120 12:34:45.261498  992635 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/embed-certs-987349/id_rsa Username:docker}
	I0120 12:34:45.263039  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:34:45.263451  992635 main.go:141] libmachine: (embed-certs-987349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:72:25", ip: ""} in network mk-embed-certs-987349: {Iface:virbr4 ExpiryTime:2025-01-20 13:29:28 +0000 UTC Type:0 Mac:52:54:00:17:72:25 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:embed-certs-987349 Clientid:01:52:54:00:17:72:25}
	I0120 12:34:45.263466  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined IP address 192.168.72.170 and MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:34:45.263718  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHPort
	I0120 12:34:45.263876  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHKeyPath
	I0120 12:34:45.264027  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHUsername
	I0120 12:34:45.264247  992635 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/embed-certs-987349/id_rsa Username:docker}
	I0120 12:34:45.271639  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41109
	I0120 12:34:45.272049  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:45.272492  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:34:45.272506  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:45.272861  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:45.273045  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetState
	I0120 12:34:45.275220  992635 main.go:141] libmachine: (embed-certs-987349) Calling .DriverName
	I0120 12:34:45.275411  992635 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 12:34:45.275425  992635 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 12:34:45.275441  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHHostname
	I0120 12:34:45.278031  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:34:45.278264  992635 main.go:141] libmachine: (embed-certs-987349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:72:25", ip: ""} in network mk-embed-certs-987349: {Iface:virbr4 ExpiryTime:2025-01-20 13:29:28 +0000 UTC Type:0 Mac:52:54:00:17:72:25 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:embed-certs-987349 Clientid:01:52:54:00:17:72:25}
	I0120 12:34:45.278284  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined IP address 192.168.72.170 and MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:34:45.278459  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHPort
	I0120 12:34:45.278651  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHKeyPath
	I0120 12:34:45.278797  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHUsername
	I0120 12:34:45.278940  992635 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/embed-certs-987349/id_rsa Username:docker}
	I0120 12:34:45.485223  992635 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:34:45.512129  992635 node_ready.go:35] waiting up to 6m0s for node "embed-certs-987349" to be "Ready" ...
	I0120 12:34:45.535766  992635 node_ready.go:49] node "embed-certs-987349" has status "Ready":"True"
	I0120 12:34:45.535800  992635 node_ready.go:38] duration metric: took 23.637811ms for node "embed-certs-987349" to be "Ready" ...
	I0120 12:34:45.535816  992635 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:34:45.546936  992635 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-cf5ts" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:45.591884  992635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 12:34:45.672669  992635 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 12:34:45.672696  992635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0120 12:34:45.706505  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0120 12:34:45.706552  992635 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0120 12:34:45.719651  992635 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 12:34:45.719685  992635 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 12:34:45.797607  992635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:34:45.912193  992635 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 12:34:45.912228  992635 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 12:34:45.919037  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0120 12:34:45.919066  992635 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0120 12:34:45.995504  992635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 12:34:45.999745  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0120 12:34:45.999769  992635 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0120 12:34:46.012312  992635 main.go:141] libmachine: Making call to close driver server
	I0120 12:34:46.012340  992635 main.go:141] libmachine: (embed-certs-987349) Calling .Close
	I0120 12:34:46.012774  992635 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:34:46.012805  992635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:34:46.012815  992635 main.go:141] libmachine: Making call to close driver server
	I0120 12:34:46.012824  992635 main.go:141] libmachine: (embed-certs-987349) Calling .Close
	I0120 12:34:46.013169  992635 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:34:46.013179  992635 main.go:141] libmachine: (embed-certs-987349) DBG | Closing plugin on server side
	I0120 12:34:46.013190  992635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:34:46.039766  992635 main.go:141] libmachine: Making call to close driver server
	I0120 12:34:46.039787  992635 main.go:141] libmachine: (embed-certs-987349) Calling .Close
	I0120 12:34:46.040079  992635 main.go:141] libmachine: (embed-certs-987349) DBG | Closing plugin on server side
	I0120 12:34:46.040141  992635 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:34:46.040161  992635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:34:46.060472  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0120 12:34:46.060499  992635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0120 12:34:46.125182  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0120 12:34:46.125209  992635 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0120 12:34:46.163864  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0120 12:34:46.163897  992635 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0120 12:34:46.271512  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0120 12:34:46.271542  992635 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0120 12:34:46.315589  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0120 12:34:46.315615  992635 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0120 12:34:46.382800  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 12:34:46.382834  992635 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0120 12:34:46.471318  992635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 12:34:47.146418  992635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.348766384s)
	I0120 12:34:47.146477  992635 main.go:141] libmachine: Making call to close driver server
	I0120 12:34:47.146493  992635 main.go:141] libmachine: (embed-certs-987349) Calling .Close
	I0120 12:34:47.146889  992635 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:34:47.146910  992635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:34:47.146920  992635 main.go:141] libmachine: Making call to close driver server
	I0120 12:34:47.146928  992635 main.go:141] libmachine: (embed-certs-987349) Calling .Close
	I0120 12:34:47.148865  992635 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:34:47.148875  992635 main.go:141] libmachine: (embed-certs-987349) DBG | Closing plugin on server side
	I0120 12:34:47.148885  992635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:34:47.375249  992635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.379691916s)
	I0120 12:34:47.375330  992635 main.go:141] libmachine: Making call to close driver server
	I0120 12:34:47.375349  992635 main.go:141] libmachine: (embed-certs-987349) Calling .Close
	I0120 12:34:47.375787  992635 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:34:47.375817  992635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:34:47.375827  992635 main.go:141] libmachine: Making call to close driver server
	I0120 12:34:47.375835  992635 main.go:141] libmachine: (embed-certs-987349) Calling .Close
	I0120 12:34:47.375855  992635 main.go:141] libmachine: (embed-certs-987349) DBG | Closing plugin on server side
	I0120 12:34:47.376085  992635 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:34:47.376105  992635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:34:47.376121  992635 addons.go:479] Verifying addon metrics-server=true in "embed-certs-987349"
	I0120 12:34:47.554735  992635 pod_ready.go:103] pod "coredns-668d6bf9bc-cf5ts" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:48.098046  992635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.626653683s)
	I0120 12:34:48.098124  992635 main.go:141] libmachine: Making call to close driver server
	I0120 12:34:48.098144  992635 main.go:141] libmachine: (embed-certs-987349) Calling .Close
	I0120 12:34:48.098568  992635 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:34:48.098628  992635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:34:48.098648  992635 main.go:141] libmachine: Making call to close driver server
	I0120 12:34:48.098651  992635 main.go:141] libmachine: (embed-certs-987349) DBG | Closing plugin on server side
	I0120 12:34:48.098663  992635 main.go:141] libmachine: (embed-certs-987349) Calling .Close
	I0120 12:34:48.098945  992635 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:34:48.098958  992635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:34:48.100362  992635 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-987349 addons enable metrics-server
	
	I0120 12:34:48.101744  992635 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0120 12:34:46.185138  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:48.185173  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:46.991590  993585 cri.go:89] found id: ""
	I0120 12:34:46.991624  993585 logs.go:282] 0 containers: []
	W0120 12:34:46.991636  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:46.991643  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:46.991709  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:47.026992  993585 cri.go:89] found id: ""
	I0120 12:34:47.027028  993585 logs.go:282] 0 containers: []
	W0120 12:34:47.027039  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:47.027052  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:47.027070  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:47.041560  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:47.041600  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:47.116950  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:47.116982  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:47.116999  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:47.220147  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:47.220186  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:47.261692  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:47.261735  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:49.823725  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:49.837812  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:49.837891  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:49.870910  993585 cri.go:89] found id: ""
	I0120 12:34:49.870942  993585 logs.go:282] 0 containers: []
	W0120 12:34:49.870954  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:49.870974  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:49.871038  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:49.901938  993585 cri.go:89] found id: ""
	I0120 12:34:49.901971  993585 logs.go:282] 0 containers: []
	W0120 12:34:49.901983  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:49.901991  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:49.902050  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:49.934859  993585 cri.go:89] found id: ""
	I0120 12:34:49.934895  993585 logs.go:282] 0 containers: []
	W0120 12:34:49.934908  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:49.934916  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:49.934978  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:49.969109  993585 cri.go:89] found id: ""
	I0120 12:34:49.969144  993585 logs.go:282] 0 containers: []
	W0120 12:34:49.969152  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:49.969159  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:49.969215  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:50.000593  993585 cri.go:89] found id: ""
	I0120 12:34:50.000624  993585 logs.go:282] 0 containers: []
	W0120 12:34:50.000634  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:50.000644  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:50.000704  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:50.031935  993585 cri.go:89] found id: ""
	I0120 12:34:50.031956  993585 logs.go:282] 0 containers: []
	W0120 12:34:50.031963  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:50.031968  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:50.032013  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:50.066876  993585 cri.go:89] found id: ""
	I0120 12:34:50.066904  993585 logs.go:282] 0 containers: []
	W0120 12:34:50.066914  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:50.066922  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:50.066980  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:50.099413  993585 cri.go:89] found id: ""
	I0120 12:34:50.099440  993585 logs.go:282] 0 containers: []
	W0120 12:34:50.099448  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:50.099458  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:50.099469  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:50.147538  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:50.147565  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:50.159202  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:50.159227  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:50.233169  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:50.233201  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:50.233218  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:50.313297  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:50.313331  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:48.102973  992635 addons.go:514] duration metric: took 2.897750546s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0120 12:34:50.054643  992635 pod_ready.go:103] pod "coredns-668d6bf9bc-cf5ts" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:50.685136  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:53.185766  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:52.849232  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:52.863600  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:52.863668  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:52.897114  993585 cri.go:89] found id: ""
	I0120 12:34:52.897146  993585 logs.go:282] 0 containers: []
	W0120 12:34:52.897158  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:52.897166  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:52.897235  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:52.931572  993585 cri.go:89] found id: ""
	I0120 12:34:52.931608  993585 logs.go:282] 0 containers: []
	W0120 12:34:52.931621  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:52.931631  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:52.931699  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:52.967427  993585 cri.go:89] found id: ""
	I0120 12:34:52.967464  993585 logs.go:282] 0 containers: []
	W0120 12:34:52.967477  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:52.967485  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:52.967550  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:53.004996  993585 cri.go:89] found id: ""
	I0120 12:34:53.005036  993585 logs.go:282] 0 containers: []
	W0120 12:34:53.005045  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:53.005052  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:53.005130  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:53.042883  993585 cri.go:89] found id: ""
	I0120 12:34:53.042920  993585 logs.go:282] 0 containers: []
	W0120 12:34:53.042932  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:53.042941  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:53.043012  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:53.081504  993585 cri.go:89] found id: ""
	I0120 12:34:53.081548  993585 logs.go:282] 0 containers: []
	W0120 12:34:53.081560  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:53.081569  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:53.081638  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:53.116486  993585 cri.go:89] found id: ""
	I0120 12:34:53.116526  993585 logs.go:282] 0 containers: []
	W0120 12:34:53.116537  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:53.116546  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:53.116621  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:53.150011  993585 cri.go:89] found id: ""
	I0120 12:34:53.150044  993585 logs.go:282] 0 containers: []
	W0120 12:34:53.150055  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:53.150068  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:53.150082  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:53.236271  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:53.236314  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:53.272793  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:53.272823  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:53.328164  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:53.328210  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:53.342124  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:53.342159  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:53.436951  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
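The block above appears to be one complete iteration of the harness's control-plane probe for this profile: it pgreps for a kube-apiserver process, asks the CRI runtime (via crictl) for each expected control-plane container by name, finds none, and then falls back to gathering kubelet, dmesg, CRI-O and container-status logs; `kubectl describe nodes` fails because nothing is listening on localhost:8443. For anyone reproducing the probe by hand on the node, a condensed shell sketch follows, built only from commands that already appear verbatim in the log (paths such as /var/lib/minikube/binaries/v1.20.0/kubectl are copied from the log and may differ for other profiles):

  # Probe for a running kube-apiserver process (same pgrep the harness runs)
  sudo pgrep -xnf 'kube-apiserver.*minikube.*'

  # Ask the CRI runtime for each expected control-plane container by name
  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
              kube-controller-manager kindnet kubernetes-dashboard; do
    sudo crictl ps -a --quiet --name="$name"
  done

  # Fallback log gathering, same commands the harness logs above
  sudo journalctl -u kubelet -n 400
  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
  sudo journalctl -u crio -n 400
  sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

  # The step that keeps failing with "connection refused" on localhost:8443
  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
    --kubeconfig=/var/lib/minikube/kubeconfig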
	I0120 12:34:55.938662  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:55.954006  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:55.954080  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:55.995805  993585 cri.go:89] found id: ""
	I0120 12:34:55.995836  993585 logs.go:282] 0 containers: []
	W0120 12:34:55.995847  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:55.995855  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:55.995922  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:56.037391  993585 cri.go:89] found id: ""
	I0120 12:34:56.037422  993585 logs.go:282] 0 containers: []
	W0120 12:34:56.037431  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:56.037440  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:56.037500  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:56.073395  993585 cri.go:89] found id: ""
	I0120 12:34:56.073432  993585 logs.go:282] 0 containers: []
	W0120 12:34:56.073444  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:56.073452  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:56.073521  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:56.113060  993585 cri.go:89] found id: ""
	I0120 12:34:56.113095  993585 logs.go:282] 0 containers: []
	W0120 12:34:56.113106  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:56.113114  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:56.113192  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:56.149448  993585 cri.go:89] found id: ""
	I0120 12:34:56.149481  993585 logs.go:282] 0 containers: []
	W0120 12:34:56.149492  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:56.149501  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:56.149565  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:56.188193  993585 cri.go:89] found id: ""
	I0120 12:34:56.188222  993585 logs.go:282] 0 containers: []
	W0120 12:34:56.188232  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:56.188241  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:56.188305  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:56.229490  993585 cri.go:89] found id: ""
	I0120 12:34:56.229520  993585 logs.go:282] 0 containers: []
	W0120 12:34:56.229530  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:56.229538  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:56.229596  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:56.268312  993585 cri.go:89] found id: ""
	I0120 12:34:56.268342  993585 logs.go:282] 0 containers: []
	W0120 12:34:56.268355  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:56.268368  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:56.268382  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:56.362946  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:56.362970  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:56.362987  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:56.449009  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:56.449049  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:56.497349  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:56.497393  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:56.552829  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:56.552864  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:52.555092  992635 pod_ready.go:93] pod "coredns-668d6bf9bc-cf5ts" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:52.555118  992635 pod_ready.go:82] duration metric: took 7.008153036s for pod "coredns-668d6bf9bc-cf5ts" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.555129  992635 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-gr6pw" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.559701  992635 pod_ready.go:93] pod "coredns-668d6bf9bc-gr6pw" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:52.559730  992635 pod_ready.go:82] duration metric: took 4.593756ms for pod "coredns-668d6bf9bc-gr6pw" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.559743  992635 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.564650  992635 pod_ready.go:93] pod "etcd-embed-certs-987349" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:52.564677  992635 pod_ready.go:82] duration metric: took 4.924851ms for pod "etcd-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.564690  992635 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.568924  992635 pod_ready.go:93] pod "kube-apiserver-embed-certs-987349" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:52.568947  992635 pod_ready.go:82] duration metric: took 4.248574ms for pod "kube-apiserver-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.568959  992635 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.573555  992635 pod_ready.go:93] pod "kube-controller-manager-embed-certs-987349" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:52.573574  992635 pod_ready.go:82] duration metric: took 4.607213ms for pod "kube-controller-manager-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.573582  992635 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xrg5x" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.951750  992635 pod_ready.go:93] pod "kube-proxy-xrg5x" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:52.951777  992635 pod_ready.go:82] duration metric: took 378.189084ms for pod "kube-proxy-xrg5x" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.951787  992635 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:53.352358  992635 pod_ready.go:93] pod "kube-scheduler-embed-certs-987349" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:53.352397  992635 pod_ready.go:82] duration metric: took 400.600706ms for pod "kube-scheduler-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:53.352410  992635 pod_ready.go:39] duration metric: took 7.816579945s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:34:53.352431  992635 api_server.go:52] waiting for apiserver process to appear ...
	I0120 12:34:53.352497  992635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:53.385445  992635 api_server.go:72] duration metric: took 8.18029522s to wait for apiserver process to appear ...
	I0120 12:34:53.385483  992635 api_server.go:88] waiting for apiserver healthz status ...
	I0120 12:34:53.385512  992635 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8443/healthz ...
	I0120 12:34:53.390273  992635 api_server.go:279] https://192.168.72.170:8443/healthz returned 200:
	ok
	I0120 12:34:53.391546  992635 api_server.go:141] control plane version: v1.32.0
	I0120 12:34:53.391569  992635 api_server.go:131] duration metric: took 6.078483ms to wait for apiserver health ...
	I0120 12:34:53.391576  992635 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 12:34:53.555192  992635 system_pods.go:59] 9 kube-system pods found
	I0120 12:34:53.555222  992635 system_pods.go:61] "coredns-668d6bf9bc-cf5ts" [91648c6f-7cef-427f-82f3-7572a9b5d80e] Running
	I0120 12:34:53.555227  992635 system_pods.go:61] "coredns-668d6bf9bc-gr6pw" [6ff16a87-0a5e-4d82-b13d-2c72afef6dc0] Running
	I0120 12:34:53.555231  992635 system_pods.go:61] "etcd-embed-certs-987349" [5a54b1fe-f8d1-43c6-a430-a37fa3fa04b7] Running
	I0120 12:34:53.555235  992635 system_pods.go:61] "kube-apiserver-embed-certs-987349" [3e1da80d-0a1d-44bb-945d-534b91eebb95] Running
	I0120 12:34:53.555241  992635 system_pods.go:61] "kube-controller-manager-embed-certs-987349" [e1f4800a-ff08-4ea5-8134-81130f2d8f3d] Running
	I0120 12:34:53.555245  992635 system_pods.go:61] "kube-proxy-xrg5x" [a76bebb9-1eed-46fb-9f3a-d3dc1a5930c7] Running
	I0120 12:34:53.555248  992635 system_pods.go:61] "kube-scheduler-embed-certs-987349" [d35e4dae-055f-4db7-b807-5767fa324498] Running
	I0120 12:34:53.555257  992635 system_pods.go:61] "metrics-server-f79f97bbb-4vcgc" [2108ac96-d8cd-429f-ac2d-babc6d97267b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 12:34:53.555262  992635 system_pods.go:61] "storage-provisioner" [953b33a8-d2a0-447d-a01b-49350c6555f7] Running
	I0120 12:34:53.555270  992635 system_pods.go:74] duration metric: took 163.687709ms to wait for pod list to return data ...
	I0120 12:34:53.555281  992635 default_sa.go:34] waiting for default service account to be created ...
	I0120 12:34:53.753014  992635 default_sa.go:45] found service account: "default"
	I0120 12:34:53.753053  992635 default_sa.go:55] duration metric: took 197.764358ms for default service account to be created ...
	I0120 12:34:53.753066  992635 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 12:34:53.953127  992635 system_pods.go:87] 9 kube-system pods found
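By contrast, the interleaved lines from the embed-certs-987349 run (the process logging as 992635) show the healthy path: the apiserver process appears, the healthz endpoint at https://192.168.72.170:8443/healthz answers 200, and nine kube-system pods are found. A rough manual equivalent is sketched below; minikube performs the healthz check in-process in Go, so the curl call (with -k to skip TLS verification) is only an assumed stand-in for a quick smoke test, not the harness's actual call:

  # Confirm the apiserver process is up (same pgrep the harness uses)
  sudo pgrep -xnf 'kube-apiserver.*minikube.*'

  # Assumed manual stand-in for the logged healthz probe; expect the body "ok"
  curl -k https://192.168.72.170:8443/healthz

  # Enumerate kube-system pods with the on-node kubectl and kubeconfig paths
  # that appear elsewhere in this log
  sudo /var/lib/minikube/binaries/v1.32.0/kubectl get pods -n kube-system \
    --kubeconfig=/var/lib/minikube/kubeconfig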
	I0120 12:34:55.685957  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:57.679747  993131 pod_ready.go:82] duration metric: took 4m0.000931966s for pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace to be "Ready" ...
	E0120 12:34:57.679804  993131 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0120 12:34:57.679835  993131 pod_ready.go:39] duration metric: took 4m14.541139208s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:34:57.679882  993131 kubeadm.go:597] duration metric: took 4m22.782450691s to restartPrimaryControlPlane
	W0120 12:34:57.679976  993131 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0120 12:34:57.680017  993131 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 12:34:59.068750  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:59.085643  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:59.085720  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:59.128466  993585 cri.go:89] found id: ""
	I0120 12:34:59.128566  993585 logs.go:282] 0 containers: []
	W0120 12:34:59.128584  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:59.128594  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:59.128677  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:59.175838  993585 cri.go:89] found id: ""
	I0120 12:34:59.175873  993585 logs.go:282] 0 containers: []
	W0120 12:34:59.175885  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:59.175893  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:59.175961  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:59.211334  993585 cri.go:89] found id: ""
	I0120 12:34:59.211371  993585 logs.go:282] 0 containers: []
	W0120 12:34:59.211383  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:59.211392  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:59.211466  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:59.248992  993585 cri.go:89] found id: ""
	I0120 12:34:59.249031  993585 logs.go:282] 0 containers: []
	W0120 12:34:59.249043  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:59.249060  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:59.249127  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:59.285229  993585 cri.go:89] found id: ""
	I0120 12:34:59.285266  993585 logs.go:282] 0 containers: []
	W0120 12:34:59.285279  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:59.285288  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:59.285367  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:59.323049  993585 cri.go:89] found id: ""
	I0120 12:34:59.323081  993585 logs.go:282] 0 containers: []
	W0120 12:34:59.323092  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:59.323099  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:59.323180  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:59.365925  993585 cri.go:89] found id: ""
	I0120 12:34:59.365968  993585 logs.go:282] 0 containers: []
	W0120 12:34:59.365978  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:59.365985  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:59.366060  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:59.406489  993585 cri.go:89] found id: ""
	I0120 12:34:59.406540  993585 logs.go:282] 0 containers: []
	W0120 12:34:59.406553  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:59.406565  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:59.406579  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:59.477858  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:59.477896  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:59.494617  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:59.494658  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:59.572132  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:59.572160  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:59.572178  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:59.668424  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:59.668471  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:02.212721  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:02.227926  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:02.228019  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:02.266386  993585 cri.go:89] found id: ""
	I0120 12:35:02.266431  993585 logs.go:282] 0 containers: []
	W0120 12:35:02.266444  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:02.266454  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:02.266541  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:02.301567  993585 cri.go:89] found id: ""
	I0120 12:35:02.301595  993585 logs.go:282] 0 containers: []
	W0120 12:35:02.301607  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:02.301615  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:02.301678  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:02.338717  993585 cri.go:89] found id: ""
	I0120 12:35:02.338758  993585 logs.go:282] 0 containers: []
	W0120 12:35:02.338770  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:02.338778  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:02.338847  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:02.373953  993585 cri.go:89] found id: ""
	I0120 12:35:02.373990  993585 logs.go:282] 0 containers: []
	W0120 12:35:02.374004  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:02.374014  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:02.374113  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:02.406791  993585 cri.go:89] found id: ""
	I0120 12:35:02.406828  993585 logs.go:282] 0 containers: []
	W0120 12:35:02.406839  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:02.406845  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:02.406897  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:02.443578  993585 cri.go:89] found id: ""
	I0120 12:35:02.443609  993585 logs.go:282] 0 containers: []
	W0120 12:35:02.443617  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:02.443626  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:02.443676  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:02.477334  993585 cri.go:89] found id: ""
	I0120 12:35:02.477374  993585 logs.go:282] 0 containers: []
	W0120 12:35:02.477387  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:02.477395  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:02.477468  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:02.511320  993585 cri.go:89] found id: ""
	I0120 12:35:02.511347  993585 logs.go:282] 0 containers: []
	W0120 12:35:02.511357  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:02.511368  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:02.511379  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:02.563616  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:02.563655  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:02.589388  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:02.589428  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:02.668649  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:02.668676  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:02.668690  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:02.754754  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:02.754788  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:05.298701  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:05.312912  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:05.312991  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:05.345040  993585 cri.go:89] found id: ""
	I0120 12:35:05.345073  993585 logs.go:282] 0 containers: []
	W0120 12:35:05.345082  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:05.345095  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:05.345166  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:05.378693  993585 cri.go:89] found id: ""
	I0120 12:35:05.378728  993585 logs.go:282] 0 containers: []
	W0120 12:35:05.378739  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:05.378747  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:05.378802  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:05.411600  993585 cri.go:89] found id: ""
	I0120 12:35:05.411628  993585 logs.go:282] 0 containers: []
	W0120 12:35:05.411636  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:05.411642  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:05.411693  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:05.444416  993585 cri.go:89] found id: ""
	I0120 12:35:05.444445  993585 logs.go:282] 0 containers: []
	W0120 12:35:05.444453  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:05.444461  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:05.444525  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:05.475125  993585 cri.go:89] found id: ""
	I0120 12:35:05.475158  993585 logs.go:282] 0 containers: []
	W0120 12:35:05.475171  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:05.475177  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:05.475246  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:05.508163  993585 cri.go:89] found id: ""
	I0120 12:35:05.508194  993585 logs.go:282] 0 containers: []
	W0120 12:35:05.508207  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:05.508215  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:05.508278  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:05.543703  993585 cri.go:89] found id: ""
	I0120 12:35:05.543737  993585 logs.go:282] 0 containers: []
	W0120 12:35:05.543745  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:05.543751  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:05.543819  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:05.579560  993585 cri.go:89] found id: ""
	I0120 12:35:05.579594  993585 logs.go:282] 0 containers: []
	W0120 12:35:05.579606  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:05.579620  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:05.579634  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:05.632935  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:05.632986  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:05.645983  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:05.646012  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:05.719551  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:05.719582  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:05.719599  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:05.799242  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:05.799283  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:08.344816  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:08.358927  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:08.359006  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:08.393237  993585 cri.go:89] found id: ""
	I0120 12:35:08.393265  993585 logs.go:282] 0 containers: []
	W0120 12:35:08.393274  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:08.393280  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:08.393333  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:08.432032  993585 cri.go:89] found id: ""
	I0120 12:35:08.432061  993585 logs.go:282] 0 containers: []
	W0120 12:35:08.432069  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:08.432077  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:08.432155  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:08.465329  993585 cri.go:89] found id: ""
	I0120 12:35:08.465357  993585 logs.go:282] 0 containers: []
	W0120 12:35:08.465366  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:08.465375  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:08.465450  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:08.498889  993585 cri.go:89] found id: ""
	I0120 12:35:08.498932  993585 logs.go:282] 0 containers: []
	W0120 12:35:08.498944  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:08.498952  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:08.499034  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:08.533799  993585 cri.go:89] found id: ""
	I0120 12:35:08.533827  993585 logs.go:282] 0 containers: []
	W0120 12:35:08.533836  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:08.533842  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:08.533898  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:08.569072  993585 cri.go:89] found id: ""
	I0120 12:35:08.569109  993585 logs.go:282] 0 containers: []
	W0120 12:35:08.569121  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:08.569129  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:08.569190  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:08.602775  993585 cri.go:89] found id: ""
	I0120 12:35:08.602815  993585 logs.go:282] 0 containers: []
	W0120 12:35:08.602828  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:08.602836  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:08.602899  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:08.637207  993585 cri.go:89] found id: ""
	I0120 12:35:08.637242  993585 logs.go:282] 0 containers: []
	W0120 12:35:08.637253  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:08.637266  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:08.637281  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:08.650046  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:08.650077  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:08.717640  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:08.717668  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:08.717682  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:08.795565  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:08.795605  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:08.832910  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:08.832951  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:11.391198  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:11.404454  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:11.404548  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:11.438901  993585 cri.go:89] found id: ""
	I0120 12:35:11.438942  993585 logs.go:282] 0 containers: []
	W0120 12:35:11.438951  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:11.438959  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:11.439028  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:11.475199  993585 cri.go:89] found id: ""
	I0120 12:35:11.475228  993585 logs.go:282] 0 containers: []
	W0120 12:35:11.475237  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:11.475243  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:11.475304  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:11.507984  993585 cri.go:89] found id: ""
	I0120 12:35:11.508029  993585 logs.go:282] 0 containers: []
	W0120 12:35:11.508041  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:11.508052  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:11.508145  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:11.544131  993585 cri.go:89] found id: ""
	I0120 12:35:11.544162  993585 logs.go:282] 0 containers: []
	W0120 12:35:11.544170  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:11.544176  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:11.544229  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:11.585316  993585 cri.go:89] found id: ""
	I0120 12:35:11.585353  993585 logs.go:282] 0 containers: []
	W0120 12:35:11.585364  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:11.585370  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:11.585424  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:11.621531  993585 cri.go:89] found id: ""
	I0120 12:35:11.621565  993585 logs.go:282] 0 containers: []
	W0120 12:35:11.621578  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:11.621587  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:11.621644  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:11.653882  993585 cri.go:89] found id: ""
	I0120 12:35:11.653915  993585 logs.go:282] 0 containers: []
	W0120 12:35:11.653926  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:11.653935  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:11.654005  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:11.686715  993585 cri.go:89] found id: ""
	I0120 12:35:11.686751  993585 logs.go:282] 0 containers: []
	W0120 12:35:11.686763  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:11.686777  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:11.686792  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:11.766495  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:11.766550  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:11.805907  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:11.805944  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:11.854399  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:11.854435  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:11.867131  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:11.867168  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:11.930826  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:14.431154  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:14.444170  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:14.444252  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:14.478030  993585 cri.go:89] found id: ""
	I0120 12:35:14.478067  993585 logs.go:282] 0 containers: []
	W0120 12:35:14.478077  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:14.478083  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:14.478148  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:14.510821  993585 cri.go:89] found id: ""
	I0120 12:35:14.510855  993585 logs.go:282] 0 containers: []
	W0120 12:35:14.510867  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:14.510874  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:14.510942  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:14.543080  993585 cri.go:89] found id: ""
	I0120 12:35:14.543136  993585 logs.go:282] 0 containers: []
	W0120 12:35:14.543149  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:14.543157  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:14.543214  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:14.579258  993585 cri.go:89] found id: ""
	I0120 12:35:14.579293  993585 logs.go:282] 0 containers: []
	W0120 12:35:14.579302  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:14.579308  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:14.579361  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:14.617149  993585 cri.go:89] found id: ""
	I0120 12:35:14.617187  993585 logs.go:282] 0 containers: []
	W0120 12:35:14.617198  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:14.617206  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:14.617278  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:14.650716  993585 cri.go:89] found id: ""
	I0120 12:35:14.650754  993585 logs.go:282] 0 containers: []
	W0120 12:35:14.650793  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:14.650803  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:14.650874  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:14.685987  993585 cri.go:89] found id: ""
	I0120 12:35:14.686018  993585 logs.go:282] 0 containers: []
	W0120 12:35:14.686026  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:14.686032  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:14.686084  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:14.736332  993585 cri.go:89] found id: ""
	I0120 12:35:14.736370  993585 logs.go:282] 0 containers: []
	W0120 12:35:14.736378  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:14.736389  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:14.736406  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:14.789693  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:14.789734  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:14.818344  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:14.818376  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:14.891944  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:14.891974  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:14.891990  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:14.969846  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:14.969888  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:17.512148  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:17.525055  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:17.525143  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:17.559502  993585 cri.go:89] found id: ""
	I0120 12:35:17.559539  993585 logs.go:282] 0 containers: []
	W0120 12:35:17.559550  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:17.559563  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:17.559624  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:17.596133  993585 cri.go:89] found id: ""
	I0120 12:35:17.596170  993585 logs.go:282] 0 containers: []
	W0120 12:35:17.596182  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:17.596190  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:17.596258  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:17.632458  993585 cri.go:89] found id: ""
	I0120 12:35:17.632511  993585 logs.go:282] 0 containers: []
	W0120 12:35:17.632526  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:17.632535  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:17.632614  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:17.666860  993585 cri.go:89] found id: ""
	I0120 12:35:17.666891  993585 logs.go:282] 0 containers: []
	W0120 12:35:17.666899  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:17.666905  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:17.666959  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:17.701282  993585 cri.go:89] found id: ""
	I0120 12:35:17.701309  993585 logs.go:282] 0 containers: []
	W0120 12:35:17.701318  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:17.701325  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:17.701384  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:17.733358  993585 cri.go:89] found id: ""
	I0120 12:35:17.733391  993585 logs.go:282] 0 containers: []
	W0120 12:35:17.733399  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:17.733406  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:17.733460  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:17.769630  993585 cri.go:89] found id: ""
	I0120 12:35:17.769661  993585 logs.go:282] 0 containers: []
	W0120 12:35:17.769670  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:17.769677  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:17.769731  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:17.801855  993585 cri.go:89] found id: ""
	I0120 12:35:17.801894  993585 logs.go:282] 0 containers: []
	W0120 12:35:17.801906  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:17.801920  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:17.801935  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:17.852827  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:17.852869  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:17.866559  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:17.866589  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:17.937036  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:17.937058  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:17.937078  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:18.011449  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:18.011482  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:20.551859  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:20.564461  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:20.564522  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:20.599674  993585 cri.go:89] found id: ""
	I0120 12:35:20.599700  993585 logs.go:282] 0 containers: []
	W0120 12:35:20.599708  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:20.599713  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:20.599761  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:20.634303  993585 cri.go:89] found id: ""
	I0120 12:35:20.634330  993585 logs.go:282] 0 containers: []
	W0120 12:35:20.634340  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:20.634347  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:20.634395  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:20.670501  993585 cri.go:89] found id: ""
	I0120 12:35:20.670552  993585 logs.go:282] 0 containers: []
	W0120 12:35:20.670562  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:20.670568  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:20.670635  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:20.703603  993585 cri.go:89] found id: ""
	I0120 12:35:20.703627  993585 logs.go:282] 0 containers: []
	W0120 12:35:20.703636  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:20.703644  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:20.703699  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:20.733456  993585 cri.go:89] found id: ""
	I0120 12:35:20.733490  993585 logs.go:282] 0 containers: []
	W0120 12:35:20.733501  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:20.733509  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:20.733565  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:20.764504  993585 cri.go:89] found id: ""
	I0120 12:35:20.764529  993585 logs.go:282] 0 containers: []
	W0120 12:35:20.764539  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:20.764547  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:20.764608  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:20.796510  993585 cri.go:89] found id: ""
	I0120 12:35:20.796543  993585 logs.go:282] 0 containers: []
	W0120 12:35:20.796553  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:20.796560  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:20.796623  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:20.828114  993585 cri.go:89] found id: ""
	I0120 12:35:20.828147  993585 logs.go:282] 0 containers: []
	W0120 12:35:20.828158  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:20.828170  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:20.828189  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:20.889902  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:20.889933  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:20.889949  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:20.962443  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:20.962471  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:20.999767  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:20.999798  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:21.050810  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:21.050837  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:23.565446  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:23.577843  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:23.577912  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:23.612669  993585 cri.go:89] found id: ""
	I0120 12:35:23.612699  993585 logs.go:282] 0 containers: []
	W0120 12:35:23.612710  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:23.612719  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:23.612787  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:23.646750  993585 cri.go:89] found id: ""
	I0120 12:35:23.646783  993585 logs.go:282] 0 containers: []
	W0120 12:35:23.646793  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:23.646799  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:23.646853  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:23.679879  993585 cri.go:89] found id: ""
	I0120 12:35:23.679907  993585 logs.go:282] 0 containers: []
	W0120 12:35:23.679917  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:23.679925  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:23.679989  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:23.713255  993585 cri.go:89] found id: ""
	I0120 12:35:23.713292  993585 logs.go:282] 0 containers: []
	W0120 12:35:23.713301  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:23.713307  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:23.713358  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:23.742940  993585 cri.go:89] found id: ""
	I0120 12:35:23.742966  993585 logs.go:282] 0 containers: []
	W0120 12:35:23.742974  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:23.742980  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:23.743029  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:23.771816  993585 cri.go:89] found id: ""
	I0120 12:35:23.771846  993585 logs.go:282] 0 containers: []
	W0120 12:35:23.771865  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:23.771871  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:23.771937  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:23.801508  993585 cri.go:89] found id: ""
	I0120 12:35:23.801536  993585 logs.go:282] 0 containers: []
	W0120 12:35:23.801544  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:23.801549  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:23.801606  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:23.830867  993585 cri.go:89] found id: ""
	I0120 12:35:23.830897  993585 logs.go:282] 0 containers: []
	W0120 12:35:23.830906  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:23.830918  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:23.830934  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:23.882650  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:23.882678  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:23.895231  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:23.895260  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:23.959418  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:23.959446  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:23.959461  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:24.036771  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:24.036802  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:26.577129  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:26.594999  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:26.595084  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:26.627078  993585 cri.go:89] found id: ""
	I0120 12:35:26.627114  993585 logs.go:282] 0 containers: []
	W0120 12:35:26.627123  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:26.627129  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:26.627184  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:26.667285  993585 cri.go:89] found id: ""
	I0120 12:35:26.667317  993585 logs.go:282] 0 containers: []
	W0120 12:35:26.667333  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:26.667340  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:26.667416  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:26.704185  993585 cri.go:89] found id: ""
	I0120 12:35:26.704216  993585 logs.go:282] 0 containers: []
	W0120 12:35:26.704227  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:26.704235  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:26.704296  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:26.738047  993585 cri.go:89] found id: ""
	I0120 12:35:26.738082  993585 logs.go:282] 0 containers: []
	W0120 12:35:26.738108  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:26.738117  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:26.738183  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:26.768751  993585 cri.go:89] found id: ""
	I0120 12:35:26.768783  993585 logs.go:282] 0 containers: []
	W0120 12:35:26.768794  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:26.768801  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:26.768865  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:26.799890  993585 cri.go:89] found id: ""
	I0120 12:35:26.799916  993585 logs.go:282] 0 containers: []
	W0120 12:35:26.799924  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:26.799930  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:26.799980  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:26.831879  993585 cri.go:89] found id: ""
	I0120 12:35:26.831910  993585 logs.go:282] 0 containers: []
	W0120 12:35:26.831921  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:26.831929  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:26.831987  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:26.869231  993585 cri.go:89] found id: ""
	I0120 12:35:26.869264  993585 logs.go:282] 0 containers: []
	W0120 12:35:26.869272  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:26.869282  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:26.869294  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:26.929958  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:26.929982  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:26.929996  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:25.897831  993131 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (28.217725548s)
	I0120 12:35:25.897928  993131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 12:35:25.911960  993131 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 12:35:25.920888  993131 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:35:25.929485  993131 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:35:25.929507  993131 kubeadm.go:157] found existing configuration files:
	
	I0120 12:35:25.929555  993131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0120 12:35:25.937714  993131 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:35:25.937770  993131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:35:25.946009  993131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0120 12:35:25.954472  993131 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:35:25.954515  993131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:35:25.962622  993131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0120 12:35:25.970420  993131 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:35:25.970466  993131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:35:25.978489  993131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0120 12:35:25.986579  993131 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:35:25.986631  993131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 12:35:25.994935  993131 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 12:35:26.145798  993131 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 12:35:27.025154  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:27.025189  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:27.073288  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:27.073333  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:27.124126  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:27.124156  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:29.638666  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:29.652209  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:29.652286  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:29.690747  993585 cri.go:89] found id: ""
	I0120 12:35:29.690777  993585 logs.go:282] 0 containers: []
	W0120 12:35:29.690789  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:29.690796  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:29.690857  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:29.721866  993585 cri.go:89] found id: ""
	I0120 12:35:29.721896  993585 logs.go:282] 0 containers: []
	W0120 12:35:29.721907  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:29.721915  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:29.721978  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:29.757564  993585 cri.go:89] found id: ""
	I0120 12:35:29.757596  993585 logs.go:282] 0 containers: []
	W0120 12:35:29.757628  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:29.757637  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:29.757712  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:29.790677  993585 cri.go:89] found id: ""
	I0120 12:35:29.790709  993585 logs.go:282] 0 containers: []
	W0120 12:35:29.790720  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:29.790728  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:29.790791  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:29.826917  993585 cri.go:89] found id: ""
	I0120 12:35:29.826953  993585 logs.go:282] 0 containers: []
	W0120 12:35:29.826965  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:29.826974  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:29.827039  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:29.861866  993585 cri.go:89] found id: ""
	I0120 12:35:29.861897  993585 logs.go:282] 0 containers: []
	W0120 12:35:29.861908  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:29.861916  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:29.861973  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:29.895508  993585 cri.go:89] found id: ""
	I0120 12:35:29.895543  993585 logs.go:282] 0 containers: []
	W0120 12:35:29.895554  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:29.895563  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:29.895623  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:29.927907  993585 cri.go:89] found id: ""
	I0120 12:35:29.927939  993585 logs.go:282] 0 containers: []
	W0120 12:35:29.927949  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:29.927961  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:29.927976  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:29.968111  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:29.968149  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:30.038475  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:30.038529  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:30.051650  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:30.051679  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:30.117850  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:30.117880  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:30.117896  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:34.909127  993131 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 12:35:34.909216  993131 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 12:35:34.909344  993131 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 12:35:34.909477  993131 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 12:35:34.909620  993131 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 12:35:34.909715  993131 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 12:35:34.911105  993131 out.go:235]   - Generating certificates and keys ...
	I0120 12:35:34.911202  993131 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 12:35:34.911293  993131 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 12:35:34.911398  993131 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 12:35:34.911468  993131 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 12:35:34.911533  993131 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 12:35:34.911590  993131 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 12:35:34.911674  993131 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 12:35:34.911735  993131 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 12:35:34.911828  993131 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 12:35:34.911943  993131 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 12:35:34.912009  993131 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 12:35:34.912100  993131 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 12:35:34.912190  993131 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 12:35:34.912286  993131 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 12:35:34.912332  993131 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 12:35:34.912438  993131 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 12:35:34.912528  993131 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 12:35:34.912635  993131 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 12:35:34.912726  993131 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 12:35:34.914123  993131 out.go:235]   - Booting up control plane ...
	I0120 12:35:34.914234  993131 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 12:35:34.914348  993131 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 12:35:34.914449  993131 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 12:35:34.914608  993131 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 12:35:34.914688  993131 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 12:35:34.914725  993131 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 12:35:34.914857  993131 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 12:35:34.914944  993131 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 12:35:34.915002  993131 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.58459ms
	I0120 12:35:34.915062  993131 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 12:35:34.915123  993131 kubeadm.go:310] [api-check] The API server is healthy after 5.503412907s
	I0120 12:35:34.915262  993131 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 12:35:34.915400  993131 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 12:35:34.915458  993131 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 12:35:34.915633  993131 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-981597 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 12:35:34.915681  993131 kubeadm.go:310] [bootstrap-token] Using token: i0tzs5.z567f1ntzr02cqfq
	I0120 12:35:34.916955  993131 out.go:235]   - Configuring RBAC rules ...
	I0120 12:35:34.917087  993131 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 12:35:34.917200  993131 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 12:35:34.917374  993131 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 12:35:34.917519  993131 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 12:35:34.917673  993131 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 12:35:34.917794  993131 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 12:35:34.917950  993131 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 12:35:34.918013  993131 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 12:35:34.918074  993131 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 12:35:34.918083  993131 kubeadm.go:310] 
	I0120 12:35:34.918237  993131 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 12:35:34.918260  993131 kubeadm.go:310] 
	I0120 12:35:34.918376  993131 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 12:35:34.918388  993131 kubeadm.go:310] 
	I0120 12:35:34.918425  993131 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 12:35:34.918506  993131 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 12:35:34.918601  993131 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 12:35:34.918613  993131 kubeadm.go:310] 
	I0120 12:35:34.918694  993131 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 12:35:34.918704  993131 kubeadm.go:310] 
	I0120 12:35:34.918758  993131 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 12:35:34.918770  993131 kubeadm.go:310] 
	I0120 12:35:34.918843  993131 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 12:35:34.918947  993131 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 12:35:34.919045  993131 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 12:35:34.919057  993131 kubeadm.go:310] 
	I0120 12:35:34.919174  993131 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 12:35:34.919281  993131 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 12:35:34.919295  993131 kubeadm.go:310] 
	I0120 12:35:34.919404  993131 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token i0tzs5.z567f1ntzr02cqfq \
	I0120 12:35:34.919548  993131 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9daca4b474befed26d429fab1ed19f40e58ea9925316ea59a2e801171f5c9665 \
	I0120 12:35:34.919582  993131 kubeadm.go:310] 	--control-plane 
	I0120 12:35:34.919594  993131 kubeadm.go:310] 
	I0120 12:35:34.919711  993131 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 12:35:34.919723  993131 kubeadm.go:310] 
	I0120 12:35:34.919827  993131 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token i0tzs5.z567f1ntzr02cqfq \
	I0120 12:35:34.919982  993131 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9daca4b474befed26d429fab1ed19f40e58ea9925316ea59a2e801171f5c9665 
	I0120 12:35:34.919999  993131 cni.go:84] Creating CNI manager for ""
	I0120 12:35:34.920015  993131 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:35:34.921475  993131 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 12:35:32.712573  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:32.725809  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:32.725886  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:32.761768  993585 cri.go:89] found id: ""
	I0120 12:35:32.761803  993585 logs.go:282] 0 containers: []
	W0120 12:35:32.761812  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:32.761818  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:32.761875  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:32.797578  993585 cri.go:89] found id: ""
	I0120 12:35:32.797610  993585 logs.go:282] 0 containers: []
	W0120 12:35:32.797621  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:32.797628  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:32.797694  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:32.834493  993585 cri.go:89] found id: ""
	I0120 12:35:32.834539  993585 logs.go:282] 0 containers: []
	W0120 12:35:32.834552  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:32.834559  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:32.834644  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:32.870730  993585 cri.go:89] found id: ""
	I0120 12:35:32.870762  993585 logs.go:282] 0 containers: []
	W0120 12:35:32.870774  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:32.870782  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:32.870851  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:32.913904  993585 cri.go:89] found id: ""
	I0120 12:35:32.913932  993585 logs.go:282] 0 containers: []
	W0120 12:35:32.913943  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:32.913951  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:32.914019  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:32.955928  993585 cri.go:89] found id: ""
	I0120 12:35:32.955961  993585 logs.go:282] 0 containers: []
	W0120 12:35:32.955972  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:32.955981  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:32.956044  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:33.001075  993585 cri.go:89] found id: ""
	I0120 12:35:33.001116  993585 logs.go:282] 0 containers: []
	W0120 12:35:33.001129  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:33.001138  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:33.001209  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:33.035918  993585 cri.go:89] found id: ""
	I0120 12:35:33.035954  993585 logs.go:282] 0 containers: []
	W0120 12:35:33.035961  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:33.035971  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:33.035981  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:33.090782  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:33.090816  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:33.107144  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:33.107171  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:33.184808  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:33.184830  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:33.184845  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:33.269131  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:33.269170  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:35.809619  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:35.822178  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:35.822254  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:35.862005  993585 cri.go:89] found id: ""
	I0120 12:35:35.862035  993585 logs.go:282] 0 containers: []
	W0120 12:35:35.862042  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:35.862050  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:35.862110  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:35.896880  993585 cri.go:89] found id: ""
	I0120 12:35:35.896909  993585 logs.go:282] 0 containers: []
	W0120 12:35:35.896920  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:35.896928  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:35.896995  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:35.931762  993585 cri.go:89] found id: ""
	I0120 12:35:35.931795  993585 logs.go:282] 0 containers: []
	W0120 12:35:35.931806  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:35.931815  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:35.931882  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:35.965205  993585 cri.go:89] found id: ""
	I0120 12:35:35.965236  993585 logs.go:282] 0 containers: []
	W0120 12:35:35.965246  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:35.965254  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:35.965310  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:35.999903  993585 cri.go:89] found id: ""
	I0120 12:35:35.999926  993585 logs.go:282] 0 containers: []
	W0120 12:35:35.999943  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:35.999956  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:36.000004  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:36.033944  993585 cri.go:89] found id: ""
	I0120 12:35:36.033981  993585 logs.go:282] 0 containers: []
	W0120 12:35:36.033992  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:36.034005  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:36.034073  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:36.066986  993585 cri.go:89] found id: ""
	I0120 12:35:36.067021  993585 logs.go:282] 0 containers: []
	W0120 12:35:36.067035  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:36.067043  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:36.067108  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:36.096989  993585 cri.go:89] found id: ""
	I0120 12:35:36.097021  993585 logs.go:282] 0 containers: []
	W0120 12:35:36.097033  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:36.097047  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:36.097062  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:36.170812  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:36.170838  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:36.208578  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:36.208619  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:36.259448  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:36.259483  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:36.273938  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:36.273968  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:36.342621  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:34.922590  993131 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 12:35:34.933756  993131 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 12:35:34.952622  993131 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 12:35:34.952700  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:34.952763  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-981597 minikube.k8s.io/updated_at=2025_01_20T12_35_34_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=77d80cf1517f5f1439721b28711982314b21bec9 minikube.k8s.io/name=default-k8s-diff-port-981597 minikube.k8s.io/primary=true
	I0120 12:35:35.145316  993131 ops.go:34] apiserver oom_adj: -16
	I0120 12:35:35.161459  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:35.662404  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:36.162367  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:36.662373  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:37.162163  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:37.661727  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:38.161998  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:38.662452  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:39.161911  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:39.336211  993131 kubeadm.go:1113] duration metric: took 4.383561407s to wait for elevateKubeSystemPrivileges
	I0120 12:35:39.336266  993131 kubeadm.go:394] duration metric: took 5m4.484253589s to StartCluster
	I0120 12:35:39.336293  993131 settings.go:142] acquiring lock: {Name:mk1751cb86b61fcfcb0ff093c93fb653ca2002ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:35:39.336426  993131 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:35:39.338834  993131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/kubeconfig: {Name:mk7b2d17f701fc845d991764ab9cc32b8b0646e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:35:39.339088  993131 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.222 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 12:35:39.339220  993131 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 12:35:39.339332  993131 config.go:182] Loaded profile config "default-k8s-diff-port-981597": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:35:39.339365  993131 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-981597"
	I0120 12:35:39.339391  993131 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-981597"
	I0120 12:35:39.339390  993131 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-981597"
	W0120 12:35:39.339401  993131 addons.go:247] addon storage-provisioner should already be in state true
	I0120 12:35:39.339408  993131 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-981597"
	W0120 12:35:39.339418  993131 addons.go:247] addon dashboard should already be in state true
	I0120 12:35:39.339411  993131 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-981597"
	I0120 12:35:39.339435  993131 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-981597"
	W0120 12:35:39.339444  993131 addons.go:247] addon metrics-server should already be in state true
	I0120 12:35:39.339444  993131 host.go:66] Checking if "default-k8s-diff-port-981597" exists ...
	I0120 12:35:39.339451  993131 host.go:66] Checking if "default-k8s-diff-port-981597" exists ...
	I0120 12:35:39.339474  993131 host.go:66] Checking if "default-k8s-diff-port-981597" exists ...
	I0120 12:35:39.339390  993131 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-981597"
	I0120 12:35:39.339493  993131 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-981597"
	I0120 12:35:39.339824  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:35:39.339865  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:39.339892  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:35:39.339923  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:39.339892  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:35:39.340012  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:39.340084  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:35:39.340125  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:39.343052  993131 out.go:177] * Verifying Kubernetes components...
	I0120 12:35:39.344268  993131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:35:39.360766  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39599
	I0120 12:35:39.360936  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34553
	I0120 12:35:39.361027  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33557
	I0120 12:35:39.361484  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:39.361615  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:39.361686  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:39.361937  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:35:39.361959  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:39.362058  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:35:39.362066  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:39.362167  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:35:39.362178  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:39.362512  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:39.362592  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:39.362613  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:39.362835  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetState
	I0120 12:35:39.363083  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:35:39.363147  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:39.363178  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33345
	I0120 12:35:39.363870  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:39.364373  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:35:39.364508  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:39.364871  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:35:39.364893  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:39.365250  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:39.365757  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:35:39.365799  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:39.366758  993131 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-981597"
	W0120 12:35:39.366781  993131 addons.go:247] addon default-storageclass should already be in state true
	I0120 12:35:39.366816  993131 host.go:66] Checking if "default-k8s-diff-port-981597" exists ...
	I0120 12:35:39.367172  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:35:39.367210  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:39.385700  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43395
	I0120 12:35:39.386220  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:39.386752  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:35:39.386776  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:39.387167  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:39.387430  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetState
	I0120 12:35:39.388835  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42259
	I0120 12:35:39.389074  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42931
	I0120 12:35:39.389290  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:39.389718  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:39.389796  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:35:39.389819  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:39.390265  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:35:39.390287  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:39.390316  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .DriverName
	I0120 12:35:39.390346  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:39.390828  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:39.391044  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetState
	I0120 12:35:39.391081  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetState
	I0120 12:35:39.392517  993131 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0120 12:35:39.392556  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42719
	I0120 12:35:39.393043  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:39.393711  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:35:39.393715  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .DriverName
	I0120 12:35:39.393730  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:39.394195  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:39.394747  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:35:39.394793  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:39.395249  993131 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:35:39.395355  993131 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0120 12:35:39.395403  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .DriverName
	I0120 12:35:39.396870  993131 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:35:39.396892  993131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 12:35:39.396914  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHHostname
	I0120 12:35:39.396998  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0120 12:35:39.397017  993131 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0120 12:35:39.397039  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHHostname
	I0120 12:35:39.399496  993131 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0120 12:35:38.843738  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:38.856444  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:38.856506  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:38.892000  993585 cri.go:89] found id: ""
	I0120 12:35:38.892027  993585 logs.go:282] 0 containers: []
	W0120 12:35:38.892037  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:38.892043  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:38.892093  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:38.930509  993585 cri.go:89] found id: ""
	I0120 12:35:38.930558  993585 logs.go:282] 0 containers: []
	W0120 12:35:38.930569  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:38.930577  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:38.930643  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:38.976632  993585 cri.go:89] found id: ""
	I0120 12:35:38.976675  993585 logs.go:282] 0 containers: []
	W0120 12:35:38.976687  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:38.976695  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:38.976763  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:39.021957  993585 cri.go:89] found id: ""
	I0120 12:35:39.021993  993585 logs.go:282] 0 containers: []
	W0120 12:35:39.022004  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:39.022011  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:39.022080  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:39.060311  993585 cri.go:89] found id: ""
	I0120 12:35:39.060352  993585 logs.go:282] 0 containers: []
	W0120 12:35:39.060366  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:39.060375  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:39.060441  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:39.097901  993585 cri.go:89] found id: ""
	I0120 12:35:39.097939  993585 logs.go:282] 0 containers: []
	W0120 12:35:39.097952  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:39.097961  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:39.098029  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:39.135291  993585 cri.go:89] found id: ""
	I0120 12:35:39.135328  993585 logs.go:282] 0 containers: []
	W0120 12:35:39.135341  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:39.135349  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:39.135415  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:39.178737  993585 cri.go:89] found id: ""
	I0120 12:35:39.178775  993585 logs.go:282] 0 containers: []
	W0120 12:35:39.178810  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:39.178822  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:39.178838  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:39.228677  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:39.228723  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:39.281237  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:39.281274  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:39.298505  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:39.298554  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:39.387325  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:39.387350  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:39.387364  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:39.400927  993131 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 12:35:39.400947  993131 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 12:35:39.400969  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHHostname
	I0120 12:35:39.401577  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHPort
	I0120 12:35:39.401584  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:35:39.401591  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:35:39.401608  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4a:e1", ip: ""} in network mk-default-k8s-diff-port-981597: {Iface:virbr1 ExpiryTime:2025-01-20 13:30:20 +0000 UTC Type:0 Mac:52:54:00:a7:4a:e1 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-981597 Clientid:01:52:54:00:a7:4a:e1}
	I0120 12:35:39.401620  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4a:e1", ip: ""} in network mk-default-k8s-diff-port-981597: {Iface:virbr1 ExpiryTime:2025-01-20 13:30:20 +0000 UTC Type:0 Mac:52:54:00:a7:4a:e1 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-981597 Clientid:01:52:54:00:a7:4a:e1}
	I0120 12:35:39.401625  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined IP address 192.168.39.222 and MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:35:39.401641  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined IP address 192.168.39.222 and MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:35:39.401644  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHPort
	I0120 12:35:39.401851  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHKeyPath
	I0120 12:35:39.401948  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHKeyPath
	I0120 12:35:39.402022  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHUsername
	I0120 12:35:39.402053  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHUsername
	I0120 12:35:39.402154  993131 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/default-k8s-diff-port-981597/id_rsa Username:docker}
	I0120 12:35:39.402468  993131 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/default-k8s-diff-port-981597/id_rsa Username:docker}
	I0120 12:35:39.404077  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:35:39.406625  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHPort
	I0120 12:35:39.406703  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4a:e1", ip: ""} in network mk-default-k8s-diff-port-981597: {Iface:virbr1 ExpiryTime:2025-01-20 13:30:20 +0000 UTC Type:0 Mac:52:54:00:a7:4a:e1 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-981597 Clientid:01:52:54:00:a7:4a:e1}
	I0120 12:35:39.406720  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined IP address 192.168.39.222 and MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:35:39.410708  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHKeyPath
	I0120 12:35:39.410899  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHUsername
	I0120 12:35:39.411057  993131 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/default-k8s-diff-port-981597/id_rsa Username:docker}
	I0120 12:35:39.414646  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46217
	I0120 12:35:39.415080  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:39.415539  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:35:39.415560  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:39.415922  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:39.416132  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetState
	I0120 12:35:39.417677  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .DriverName
	I0120 12:35:39.417895  993131 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 12:35:39.417909  993131 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 12:35:39.417927  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHHostname
	I0120 12:35:39.422636  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:35:39.422665  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4a:e1", ip: ""} in network mk-default-k8s-diff-port-981597: {Iface:virbr1 ExpiryTime:2025-01-20 13:30:20 +0000 UTC Type:0 Mac:52:54:00:a7:4a:e1 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-981597 Clientid:01:52:54:00:a7:4a:e1}
	I0120 12:35:39.422682  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined IP address 192.168.39.222 and MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:35:39.422694  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHPort
	I0120 12:35:39.424675  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHKeyPath
	I0120 12:35:39.424843  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHUsername
	I0120 12:35:39.424988  993131 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/default-k8s-diff-port-981597/id_rsa Username:docker}
	I0120 12:35:39.601008  993131 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:35:39.644654  993131 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-981597" to be "Ready" ...
	I0120 12:35:39.675702  993131 node_ready.go:49] node "default-k8s-diff-port-981597" has status "Ready":"True"
	I0120 12:35:39.675723  993131 node_ready.go:38] duration metric: took 31.032591ms for node "default-k8s-diff-port-981597" to be "Ready" ...
	I0120 12:35:39.675734  993131 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:35:39.685490  993131 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-cn8tc" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:39.768195  993131 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 12:35:39.768218  993131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0120 12:35:39.812873  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0120 12:35:39.812897  993131 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0120 12:35:39.822881  993131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 12:35:39.825928  993131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:35:39.846613  993131 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 12:35:39.846645  993131 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 12:35:39.883996  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0120 12:35:39.884037  993131 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0120 12:35:39.935435  993131 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 12:35:39.935470  993131 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 12:35:39.992813  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0120 12:35:39.992840  993131 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0120 12:35:40.026214  993131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 12:35:40.069154  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0120 12:35:40.069190  993131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0120 12:35:40.121948  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0120 12:35:40.121983  993131 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0120 12:35:40.243520  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0120 12:35:40.243553  993131 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0120 12:35:40.252481  993131 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:40.252512  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .Close
	I0120 12:35:40.252849  993131 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:40.252872  993131 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:40.252885  993131 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:40.252900  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .Close
	I0120 12:35:40.253335  993131 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:40.253397  993131 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:40.253372  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | Closing plugin on server side
	I0120 12:35:40.257887  993131 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:40.257903  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .Close
	I0120 12:35:40.258196  993131 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:40.258214  993131 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:40.295226  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0120 12:35:40.295255  993131 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0120 12:35:40.386270  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0120 12:35:40.386304  993131 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0120 12:35:40.478877  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 12:35:40.478909  993131 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0120 12:35:40.533601  993131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 12:35:40.863384  993131 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.037420526s)
	I0120 12:35:40.863438  993131 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:40.863447  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .Close
	I0120 12:35:40.863790  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | Closing plugin on server side
	I0120 12:35:40.863831  993131 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:40.863841  993131 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:40.863851  993131 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:40.863864  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .Close
	I0120 12:35:40.864124  993131 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:40.864145  993131 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:40.864150  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | Closing plugin on server side
	I0120 12:35:41.207665  993131 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.181404643s)
	I0120 12:35:41.207727  993131 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:41.207743  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .Close
	I0120 12:35:41.208079  993131 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:41.208098  993131 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:41.208117  993131 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:41.208126  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .Close
	I0120 12:35:41.208422  993131 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:41.208445  993131 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:41.208445  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | Closing plugin on server side
	I0120 12:35:41.208456  993131 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-981597"
	I0120 12:35:41.719786  993131 pod_ready.go:93] pod "coredns-668d6bf9bc-cn8tc" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:41.719813  993131 pod_ready.go:82] duration metric: took 2.034287913s for pod "coredns-668d6bf9bc-cn8tc" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:41.719823  993131 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-g9m4p" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:41.984277  993131 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.450618233s)
	I0120 12:35:41.984341  993131 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:41.984368  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .Close
	I0120 12:35:41.984689  993131 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:41.984706  993131 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:41.984718  993131 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:41.984728  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .Close
	I0120 12:35:41.984738  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | Closing plugin on server side
	I0120 12:35:41.985071  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | Closing plugin on server side
	I0120 12:35:41.985119  993131 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:41.985138  993131 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:41.986711  993131 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-981597 addons enable metrics-server
	
	I0120 12:35:41.988326  993131 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0120 12:35:41.989523  993131 addons.go:514] duration metric: took 2.650315965s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0120 12:35:43.726169  993131 pod_ready.go:103] pod "coredns-668d6bf9bc-g9m4p" in "kube-system" namespace has status "Ready":"False"
	I0120 12:35:41.981886  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:41.996139  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:41.996203  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:42.028240  993585 cri.go:89] found id: ""
	I0120 12:35:42.028267  993585 logs.go:282] 0 containers: []
	W0120 12:35:42.028279  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:42.028287  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:42.028351  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:42.063513  993585 cri.go:89] found id: ""
	I0120 12:35:42.063544  993585 logs.go:282] 0 containers: []
	W0120 12:35:42.063553  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:42.063561  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:42.063622  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:42.095602  993585 cri.go:89] found id: ""
	I0120 12:35:42.095637  993585 logs.go:282] 0 containers: []
	W0120 12:35:42.095648  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:42.095656  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:42.095712  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:42.128427  993585 cri.go:89] found id: ""
	I0120 12:35:42.128460  993585 logs.go:282] 0 containers: []
	W0120 12:35:42.128471  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:42.128478  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:42.128539  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:42.163430  993585 cri.go:89] found id: ""
	I0120 12:35:42.163462  993585 logs.go:282] 0 containers: []
	W0120 12:35:42.163473  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:42.163487  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:42.163601  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:42.212225  993585 cri.go:89] found id: ""
	I0120 12:35:42.212251  993585 logs.go:282] 0 containers: []
	W0120 12:35:42.212259  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:42.212265  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:42.212326  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:42.251596  993585 cri.go:89] found id: ""
	I0120 12:35:42.251623  993585 logs.go:282] 0 containers: []
	W0120 12:35:42.251631  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:42.251637  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:42.251697  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:42.288436  993585 cri.go:89] found id: ""
	I0120 12:35:42.288472  993585 logs.go:282] 0 containers: []
	W0120 12:35:42.288485  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:42.288498  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:42.288514  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:42.351809  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:42.351858  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:42.367697  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:42.367740  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:42.445420  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:42.445452  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:42.445470  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:42.529150  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:42.529190  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:45.068423  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:45.083648  993585 kubeadm.go:597] duration metric: took 4m4.248047549s to restartPrimaryControlPlane
	W0120 12:35:45.083733  993585 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0120 12:35:45.083773  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 12:35:48.615167  993585 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.531361181s)
	I0120 12:35:48.615262  993585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 12:35:48.629340  993585 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 12:35:48.640853  993585 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:35:48.653161  993585 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:35:48.653181  993585 kubeadm.go:157] found existing configuration files:
	
	I0120 12:35:48.653220  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 12:35:48.662422  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:35:48.662489  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:35:48.672006  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 12:35:48.681430  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:35:48.681493  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:35:48.690703  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 12:35:48.699479  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:35:48.699551  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:35:48.708576  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 12:35:48.717379  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:35:48.717440  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 12:35:48.727690  993585 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 12:35:48.809089  993585 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0120 12:35:48.809181  993585 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 12:35:48.968180  993585 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 12:35:48.968344  993585 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 12:35:48.968503  993585 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0120 12:35:49.164019  993585 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 12:35:45.813799  993131 pod_ready.go:103] pod "coredns-668d6bf9bc-g9m4p" in "kube-system" namespace has status "Ready":"False"
	I0120 12:35:48.227053  993131 pod_ready.go:103] pod "coredns-668d6bf9bc-g9m4p" in "kube-system" namespace has status "Ready":"False"
	I0120 12:35:48.729367  993131 pod_ready.go:93] pod "coredns-668d6bf9bc-g9m4p" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:48.729409  993131 pod_ready.go:82] duration metric: took 7.009577783s for pod "coredns-668d6bf9bc-g9m4p" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:48.729423  993131 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:48.735596  993131 pod_ready.go:93] pod "etcd-default-k8s-diff-port-981597" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:48.735621  993131 pod_ready.go:82] duration metric: took 6.188248ms for pod "etcd-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:48.735635  993131 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:48.748236  993131 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-981597" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:48.748262  993131 pod_ready.go:82] duration metric: took 12.618834ms for pod "kube-apiserver-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:48.748275  993131 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:48.758672  993131 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-981597" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:48.758703  993131 pod_ready.go:82] duration metric: took 10.418952ms for pod "kube-controller-manager-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:48.758717  993131 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sn66t" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:48.766403  993131 pod_ready.go:93] pod "kube-proxy-sn66t" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:48.766423  993131 pod_ready.go:82] duration metric: took 7.698237ms for pod "kube-proxy-sn66t" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:48.766433  993131 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:49.124688  993131 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-981597" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:49.124714  993131 pod_ready.go:82] duration metric: took 358.274237ms for pod "kube-scheduler-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:49.124723  993131 pod_ready.go:39] duration metric: took 9.44898025s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:35:49.124740  993131 api_server.go:52] waiting for apiserver process to appear ...
	I0120 12:35:49.124803  993131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:49.172406  993131 api_server.go:72] duration metric: took 9.833266884s to wait for apiserver process to appear ...
	I0120 12:35:49.172434  993131 api_server.go:88] waiting for apiserver healthz status ...
	I0120 12:35:49.172459  993131 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0120 12:35:49.177280  993131 api_server.go:279] https://192.168.39.222:8444/healthz returned 200:
	ok
	I0120 12:35:49.178469  993131 api_server.go:141] control plane version: v1.32.0
	I0120 12:35:49.178498  993131 api_server.go:131] duration metric: took 6.05652ms to wait for apiserver health ...
	I0120 12:35:49.178508  993131 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 12:35:49.166637  993585 out.go:235]   - Generating certificates and keys ...
	I0120 12:35:49.166743  993585 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 12:35:49.166851  993585 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 12:35:49.166969  993585 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 12:35:49.167055  993585 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 12:35:49.167163  993585 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 12:35:49.167247  993585 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 12:35:49.167333  993585 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 12:35:49.167596  993585 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 12:35:49.167953  993585 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 12:35:49.168592  993585 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 12:35:49.168717  993585 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 12:35:49.168824  993585 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 12:35:49.305660  993585 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 12:35:49.652487  993585 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 12:35:49.782615  993585 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 12:35:49.921695  993585 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 12:35:49.937706  993585 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 12:35:49.939001  993585 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 12:35:49.939074  993585 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 12:35:50.070984  993585 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 12:35:50.072848  993585 out.go:235]   - Booting up control plane ...
	I0120 12:35:50.072980  993585 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 12:35:50.082351  993585 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 12:35:50.082939  993585 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 12:35:50.083932  993585 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 12:35:50.088842  993585 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0120 12:35:49.328775  993131 system_pods.go:59] 9 kube-system pods found
	I0120 12:35:49.328811  993131 system_pods.go:61] "coredns-668d6bf9bc-cn8tc" [19a18120-8f3f-45bd-92f3-c291423f4895] Running
	I0120 12:35:49.328819  993131 system_pods.go:61] "coredns-668d6bf9bc-g9m4p" [9e3e4568-92ab-4ee5-b10a-5489b72248d6] Running
	I0120 12:35:49.328825  993131 system_pods.go:61] "etcd-default-k8s-diff-port-981597" [82f73dcc-1328-428e-8eb7-550c9b2d2b22] Running
	I0120 12:35:49.328831  993131 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-981597" [ff2d67bb-7ff8-44ac-a043-b6f423339fc7] Running
	I0120 12:35:49.328837  993131 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-981597" [fa91d7b8-200d-464f-b2b0-3a08a4f435d8] Running
	I0120 12:35:49.328842  993131 system_pods.go:61] "kube-proxy-sn66t" [a90855a0-c87a-4b55-bd0e-4b95b062479d] Running
	I0120 12:35:49.328847  993131 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-981597" [26bb9f8b-4e05-4cb9-a863-75d6a6a6b652] Running
	I0120 12:35:49.328856  993131 system_pods.go:61] "metrics-server-f79f97bbb-xkrxx" [cf78f231-b1e0-4566-817b-bfb9b8dac3f6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 12:35:49.328862  993131 system_pods.go:61] "storage-provisioner" [e77b12e8-25f3-43ad-8588-2716dd4ccbd1] Running
	I0120 12:35:49.328876  993131 system_pods.go:74] duration metric: took 150.359796ms to wait for pod list to return data ...
	I0120 12:35:49.328889  993131 default_sa.go:34] waiting for default service account to be created ...
	I0120 12:35:49.619916  993131 default_sa.go:45] found service account: "default"
	I0120 12:35:49.619954  993131 default_sa.go:55] duration metric: took 291.056324ms for default service account to be created ...
	I0120 12:35:49.619967  993131 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 12:35:49.728886  993131 system_pods.go:87] 9 kube-system pods found
	I0120 12:36:30.091045  993585 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0120 12:36:30.091553  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:36:30.091777  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:36:35.092197  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:36:35.092442  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:36:45.093033  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:36:45.093302  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:37:05.094270  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:37:05.094487  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:37:45.096146  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:37:45.096378  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:37:45.096414  993585 kubeadm.go:310] 
	I0120 12:37:45.096477  993585 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0120 12:37:45.096535  993585 kubeadm.go:310] 		timed out waiting for the condition
	I0120 12:37:45.096547  993585 kubeadm.go:310] 
	I0120 12:37:45.096623  993585 kubeadm.go:310] 	This error is likely caused by:
	I0120 12:37:45.096688  993585 kubeadm.go:310] 		- The kubelet is not running
	I0120 12:37:45.096836  993585 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0120 12:37:45.096847  993585 kubeadm.go:310] 
	I0120 12:37:45.096982  993585 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0120 12:37:45.097022  993585 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0120 12:37:45.097075  993585 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0120 12:37:45.097088  993585 kubeadm.go:310] 
	I0120 12:37:45.097213  993585 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0120 12:37:45.097323  993585 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0120 12:37:45.097344  993585 kubeadm.go:310] 
	I0120 12:37:45.097440  993585 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0120 12:37:45.097575  993585 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0120 12:37:45.097684  993585 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0120 12:37:45.097786  993585 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0120 12:37:45.097798  993585 kubeadm.go:310] 
	I0120 12:37:45.098707  993585 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 12:37:45.098836  993585 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0120 12:37:45.098939  993585 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0120 12:37:45.099133  993585 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0120 12:37:45.099186  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 12:37:45.553353  993585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 12:37:45.568252  993585 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:37:45.577030  993585 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:37:45.577047  993585 kubeadm.go:157] found existing configuration files:
	
	I0120 12:37:45.577084  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 12:37:45.585663  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:37:45.585715  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:37:45.594051  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 12:37:45.602109  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:37:45.602159  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:37:45.610431  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 12:37:45.619241  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:37:45.619279  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:37:45.627467  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 12:37:45.636457  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:37:45.636508  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 12:37:45.644627  993585 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 12:37:45.711254  993585 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0120 12:37:45.711363  993585 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 12:37:45.852391  993585 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 12:37:45.852543  993585 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 12:37:45.852693  993585 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0120 12:37:46.034483  993585 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 12:37:46.036223  993585 out.go:235]   - Generating certificates and keys ...
	I0120 12:37:46.036346  993585 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 12:37:46.036455  993585 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 12:37:46.036570  993585 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 12:37:46.036663  993585 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 12:37:46.036789  993585 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 12:37:46.036889  993585 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 12:37:46.037251  993585 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 12:37:46.037740  993585 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 12:37:46.038025  993585 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 12:37:46.038414  993585 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 12:37:46.038478  993585 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 12:37:46.038581  993585 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 12:37:46.266444  993585 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 12:37:46.393858  993585 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 12:37:46.536948  993585 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 12:37:46.765338  993585 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 12:37:46.783975  993585 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 12:37:46.785028  993585 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 12:37:46.785076  993585 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 12:37:46.920894  993585 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 12:37:46.922757  993585 out.go:235]   - Booting up control plane ...
	I0120 12:37:46.922892  993585 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 12:37:46.929056  993585 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 12:37:46.933400  993585 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 12:37:46.933527  993585 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 12:37:46.939663  993585 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0120 12:38:26.942147  993585 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0120 12:38:26.942793  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:38:26.943016  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:38:31.943340  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:38:31.943563  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:38:41.944064  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:38:41.944316  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:39:01.944375  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:39:01.944608  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:39:41.943032  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:39:41.943264  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:39:41.943273  993585 kubeadm.go:310] 
	I0120 12:39:41.943326  993585 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0120 12:39:41.943363  993585 kubeadm.go:310] 		timed out waiting for the condition
	I0120 12:39:41.943383  993585 kubeadm.go:310] 
	I0120 12:39:41.943444  993585 kubeadm.go:310] 	This error is likely caused by:
	I0120 12:39:41.943506  993585 kubeadm.go:310] 		- The kubelet is not running
	I0120 12:39:41.943609  993585 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0120 12:39:41.943617  993585 kubeadm.go:310] 
	I0120 12:39:41.943716  993585 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0120 12:39:41.943762  993585 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0120 12:39:41.943814  993585 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0120 12:39:41.943826  993585 kubeadm.go:310] 
	I0120 12:39:41.943914  993585 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0120 12:39:41.944033  993585 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0120 12:39:41.944052  993585 kubeadm.go:310] 
	I0120 12:39:41.944219  993585 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0120 12:39:41.944348  993585 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0120 12:39:41.944450  993585 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0120 12:39:41.944591  993585 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0120 12:39:41.944613  993585 kubeadm.go:310] 
	I0120 12:39:41.945529  993585 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 12:39:41.945621  993585 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0120 12:39:41.945690  993585 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0120 12:39:41.945758  993585 kubeadm.go:394] duration metric: took 8m1.157734369s to StartCluster
	I0120 12:39:41.945816  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:39:41.945871  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:39:41.989147  993585 cri.go:89] found id: ""
	I0120 12:39:41.989175  993585 logs.go:282] 0 containers: []
	W0120 12:39:41.989183  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:39:41.989188  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:39:41.989251  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:39:42.021608  993585 cri.go:89] found id: ""
	I0120 12:39:42.021631  993585 logs.go:282] 0 containers: []
	W0120 12:39:42.021639  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:39:42.021646  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:39:42.021706  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:39:42.062565  993585 cri.go:89] found id: ""
	I0120 12:39:42.062592  993585 logs.go:282] 0 containers: []
	W0120 12:39:42.062601  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:39:42.062607  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:39:42.062659  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:39:42.097040  993585 cri.go:89] found id: ""
	I0120 12:39:42.097067  993585 logs.go:282] 0 containers: []
	W0120 12:39:42.097075  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:39:42.097081  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:39:42.097144  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:39:42.128833  993585 cri.go:89] found id: ""
	I0120 12:39:42.128862  993585 logs.go:282] 0 containers: []
	W0120 12:39:42.128873  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:39:42.128880  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:39:42.128936  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:39:42.159564  993585 cri.go:89] found id: ""
	I0120 12:39:42.159596  993585 logs.go:282] 0 containers: []
	W0120 12:39:42.159608  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:39:42.159616  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:39:42.159676  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:39:42.189336  993585 cri.go:89] found id: ""
	I0120 12:39:42.189367  993585 logs.go:282] 0 containers: []
	W0120 12:39:42.189378  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:39:42.189386  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:39:42.189450  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:39:42.228745  993585 cri.go:89] found id: ""
	I0120 12:39:42.228776  993585 logs.go:282] 0 containers: []
	W0120 12:39:42.228787  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:39:42.228801  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:39:42.228818  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:39:42.244466  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:39:42.244508  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:39:42.336809  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:39:42.336832  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:39:42.336844  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:39:42.443413  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:39:42.443445  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:39:42.481436  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:39:42.481466  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0120 12:39:42.533396  993585 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0120 12:39:42.533472  993585 out.go:270] * 
	W0120 12:39:42.533585  993585 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0120 12:39:42.533610  993585 out.go:270] * 
	W0120 12:39:42.534617  993585 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0120 12:39:42.537661  993585 out.go:201] 
	W0120 12:39:42.538809  993585 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0120 12:39:42.538865  993585 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0120 12:39:42.538897  993585 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0120 12:39:42.540269  993585 out.go:201] 
	
	
	==> CRI-O <==
	Jan 20 12:48:46 old-k8s-version-134433 crio[630]: time="2025-01-20 12:48:46.186886285Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377326186860515,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e50c3005-28a3-483e-a4e8-9eebd3bacb3f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:48:46 old-k8s-version-134433 crio[630]: time="2025-01-20 12:48:46.187329828Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=93417af9-0b82-4a7b-8289-3be5db8d4018 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:48:46 old-k8s-version-134433 crio[630]: time="2025-01-20 12:48:46.187371856Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=93417af9-0b82-4a7b-8289-3be5db8d4018 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:48:46 old-k8s-version-134433 crio[630]: time="2025-01-20 12:48:46.187409017Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=93417af9-0b82-4a7b-8289-3be5db8d4018 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:48:46 old-k8s-version-134433 crio[630]: time="2025-01-20 12:48:46.222267883Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=17bfa3c9-36ca-4cd2-9551-17393dd8e320 name=/runtime.v1.RuntimeService/Version
	Jan 20 12:48:46 old-k8s-version-134433 crio[630]: time="2025-01-20 12:48:46.222374608Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=17bfa3c9-36ca-4cd2-9551-17393dd8e320 name=/runtime.v1.RuntimeService/Version
	Jan 20 12:48:46 old-k8s-version-134433 crio[630]: time="2025-01-20 12:48:46.223549632Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b84ef454-61e8-4cae-a327-b432d06123f5 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:48:46 old-k8s-version-134433 crio[630]: time="2025-01-20 12:48:46.223956256Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377326223934725,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b84ef454-61e8-4cae-a327-b432d06123f5 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:48:46 old-k8s-version-134433 crio[630]: time="2025-01-20 12:48:46.224445821Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a49457c-d6bf-4a6e-8eef-6aa9605fc9e2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:48:46 old-k8s-version-134433 crio[630]: time="2025-01-20 12:48:46.224508904Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a49457c-d6bf-4a6e-8eef-6aa9605fc9e2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:48:46 old-k8s-version-134433 crio[630]: time="2025-01-20 12:48:46.224538639Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3a49457c-d6bf-4a6e-8eef-6aa9605fc9e2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:48:46 old-k8s-version-134433 crio[630]: time="2025-01-20 12:48:46.256182820Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9da92e70-1b8a-4cf3-9412-032d11699c36 name=/runtime.v1.RuntimeService/Version
	Jan 20 12:48:46 old-k8s-version-134433 crio[630]: time="2025-01-20 12:48:46.256297326Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9da92e70-1b8a-4cf3-9412-032d11699c36 name=/runtime.v1.RuntimeService/Version
	Jan 20 12:48:46 old-k8s-version-134433 crio[630]: time="2025-01-20 12:48:46.257645834Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=578935c2-04ab-4a49-a4c7-0caea775dec1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:48:46 old-k8s-version-134433 crio[630]: time="2025-01-20 12:48:46.258038283Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377326258010812,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=578935c2-04ab-4a49-a4c7-0caea775dec1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:48:46 old-k8s-version-134433 crio[630]: time="2025-01-20 12:48:46.258483600Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=60d81846-520d-4ff2-b45d-ee9b9b74e357 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:48:46 old-k8s-version-134433 crio[630]: time="2025-01-20 12:48:46.258528859Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=60d81846-520d-4ff2-b45d-ee9b9b74e357 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:48:46 old-k8s-version-134433 crio[630]: time="2025-01-20 12:48:46.258570733Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=60d81846-520d-4ff2-b45d-ee9b9b74e357 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:48:46 old-k8s-version-134433 crio[630]: time="2025-01-20 12:48:46.287397451Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=63904158-d81c-445d-893a-f72f8bfa0680 name=/runtime.v1.RuntimeService/Version
	Jan 20 12:48:46 old-k8s-version-134433 crio[630]: time="2025-01-20 12:48:46.287468407Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=63904158-d81c-445d-893a-f72f8bfa0680 name=/runtime.v1.RuntimeService/Version
	Jan 20 12:48:46 old-k8s-version-134433 crio[630]: time="2025-01-20 12:48:46.288327502Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=903c4808-6735-4130-b138-7dad98128e8b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:48:46 old-k8s-version-134433 crio[630]: time="2025-01-20 12:48:46.288881975Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377326288852057,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=903c4808-6735-4130-b138-7dad98128e8b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:48:46 old-k8s-version-134433 crio[630]: time="2025-01-20 12:48:46.289496881Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e3d4ce7c-6a58-47b1-9dfe-e6f9fac08637 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:48:46 old-k8s-version-134433 crio[630]: time="2025-01-20 12:48:46.289591002Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e3d4ce7c-6a58-47b1-9dfe-e6f9fac08637 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:48:46 old-k8s-version-134433 crio[630]: time="2025-01-20 12:48:46.289626367Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e3d4ce7c-6a58-47b1-9dfe-e6f9fac08637 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan20 12:31] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054920] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043464] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.939919] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.154572] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.498654] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.775976] systemd-fstab-generator[557]: Ignoring "noauto" option for root device
	[  +0.069639] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050163] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.195196] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.136181] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.241855] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +6.257251] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.068017] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.557848] systemd-fstab-generator[1007]: Ignoring "noauto" option for root device
	[ +12.735598] kauditd_printk_skb: 46 callbacks suppressed
	[Jan20 12:35] systemd-fstab-generator[5113]: Ignoring "noauto" option for root device
	[Jan20 12:37] systemd-fstab-generator[5394]: Ignoring "noauto" option for root device
	[  +0.069529] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:48:46 up 17 min,  0 users,  load average: 0.03, 0.03, 0.04
	Linux old-k8s-version-134433 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jan 20 12:48:42 old-k8s-version-134433 kubelet[6559]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001100c0, 0xc000cb1b90)
	Jan 20 12:48:42 old-k8s-version-134433 kubelet[6559]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Jan 20 12:48:42 old-k8s-version-134433 kubelet[6559]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Jan 20 12:48:42 old-k8s-version-134433 kubelet[6559]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Jan 20 12:48:42 old-k8s-version-134433 kubelet[6559]: goroutine 163 [select]:
	Jan 20 12:48:42 old-k8s-version-134433 kubelet[6559]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000befef0, 0x4f0ac20, 0xc0003ef8b0, 0x1, 0xc0001100c0)
	Jan 20 12:48:42 old-k8s-version-134433 kubelet[6559]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Jan 20 12:48:42 old-k8s-version-134433 kubelet[6559]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000c6e620, 0xc0001100c0)
	Jan 20 12:48:42 old-k8s-version-134433 kubelet[6559]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jan 20 12:48:42 old-k8s-version-134433 kubelet[6559]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jan 20 12:48:42 old-k8s-version-134433 kubelet[6559]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jan 20 12:48:42 old-k8s-version-134433 kubelet[6559]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000391050, 0xc000262d20)
	Jan 20 12:48:42 old-k8s-version-134433 kubelet[6559]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jan 20 12:48:42 old-k8s-version-134433 kubelet[6559]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jan 20 12:48:42 old-k8s-version-134433 kubelet[6559]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jan 20 12:48:42 old-k8s-version-134433 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 20 12:48:42 old-k8s-version-134433 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 20 12:48:42 old-k8s-version-134433 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Jan 20 12:48:42 old-k8s-version-134433 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 20 12:48:42 old-k8s-version-134433 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 20 12:48:42 old-k8s-version-134433 kubelet[6569]: I0120 12:48:42.787389    6569 server.go:416] Version: v1.20.0
	Jan 20 12:48:42 old-k8s-version-134433 kubelet[6569]: I0120 12:48:42.787643    6569 server.go:837] Client rotation is on, will bootstrap in background
	Jan 20 12:48:42 old-k8s-version-134433 kubelet[6569]: I0120 12:48:42.789656    6569 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 20 12:48:42 old-k8s-version-134433 kubelet[6569]: W0120 12:48:42.790604    6569 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jan 20 12:48:42 old-k8s-version-134433 kubelet[6569]: I0120 12:48:42.790949    6569 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-134433 -n old-k8s-version-134433
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-134433 -n old-k8s-version-134433: exit status 2 (228.724369ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-134433" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.28s)
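The failure output above ends with minikube's own suggestion (check 'journalctl -xeu kubelet', try the systemd cgroup driver). A minimal follow-up sketch of those diagnostics, assuming the same profile name used in this run (old-k8s-version-134433) and that the commands quoted in the kubeadm output apply to this VM, could look like:

	# Inspect the kubelet that never became healthy on port 10248 (commands taken from the log above)
	out/minikube-linux-amd64 -p old-k8s-version-134433 ssh -- 'systemctl status kubelet'
	out/minikube-linux-amd64 -p old-k8s-version-134433 ssh -- 'sudo journalctl -xeu kubelet | tail -n 100'
	# List any control-plane containers CRI-O managed to start
	out/minikube-linux-amd64 -p old-k8s-version-134433 ssh -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Retry suggested by the log if the cgroup driver mismatch is the cause
	out/minikube-linux-amd64 start -p old-k8s-version-134433 --extra-config=kubelet.cgroup-driver=systemd

This is only a sketch of the log's own advice, not part of the recorded test run; the kubelet stack trace and "restart counter is at 114" lines above indicate the kubelet was crash-looping throughout the 8m1s StartCluster window.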

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (372.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
(warning repeated 27 times)
E0120 12:49:37.399880  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
(warning repeated 156 times)
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
E0120 12:52:40.485320  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
E0120 12:52:41.308485  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/functional-473856/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
[the warning above was emitted 116 times in this interval; identical duplicate lines collapsed]
E0120 12:54:37.399657  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.250:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.250:8443: connect: connection refused
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
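The connection-refused warnings above show why the wait timed out: the apiserver at 192.168.50.250:8443 never came back. A rough way to reproduce the same pod lookup by hand, assuming the kubeconfig context from this run is still available, is:

	kubectl --context old-k8s-version-134433 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard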
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-134433 -n old-k8s-version-134433
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-134433 -n old-k8s-version-134433: exit status 2 (244.25188ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "old-k8s-version-134433" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-134433 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-134433 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.58µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-134433 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
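The image assertion above has nothing to report because the describe call timed out. Once the apiserver is reachable again, the expected image (registry.k8s.io/echoserver:1.4) can be checked directly with a sketch like:

	kubectl --context old-k8s-version-134433 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'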
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-134433 -n old-k8s-version-134433
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-134433 -n old-k8s-version-134433: exit status 2 (238.199256ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-134433 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-134433 logs -n 25: (1.071891762s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-049625                           | kubernetes-upgrade-049625    | jenkins | v1.35.0 | 20 Jan 25 12:25 UTC | 20 Jan 25 12:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-049625                           | kubernetes-upgrade-049625    | jenkins | v1.35.0 | 20 Jan 25 12:26 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-049625                           | kubernetes-upgrade-049625    | jenkins | v1.35.0 | 20 Jan 25 12:26 UTC | 20 Jan 25 12:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-049625                           | kubernetes-upgrade-049625    | jenkins | v1.35.0 | 20 Jan 25 12:26 UTC | 20 Jan 25 12:26 UTC |
	| start   | -p embed-certs-987349                                  | embed-certs-987349           | jenkins | v1.35.0 | 20 Jan 25 12:26 UTC | 20 Jan 25 12:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-496524             | no-preload-496524            | jenkins | v1.35.0 | 20 Jan 25 12:26 UTC | 20 Jan 25 12:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-496524                                   | no-preload-496524            | jenkins | v1.35.0 | 20 Jan 25 12:26 UTC | 20 Jan 25 12:28 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-673364                              | cert-expiration-673364       | jenkins | v1.35.0 | 20 Jan 25 12:27 UTC | 20 Jan 25 12:27 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-673364                              | cert-expiration-673364       | jenkins | v1.35.0 | 20 Jan 25 12:27 UTC | 20 Jan 25 12:27 UTC |
	| delete  | -p                                                     | disable-driver-mounts-969801 | jenkins | v1.35.0 | 20 Jan 25 12:27 UTC | 20 Jan 25 12:27 UTC |
	|         | disable-driver-mounts-969801                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-981597 | jenkins | v1.35.0 | 20 Jan 25 12:27 UTC | 20 Jan 25 12:28 UTC |
	|         | default-k8s-diff-port-981597                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-987349            | embed-certs-987349           | jenkins | v1.35.0 | 20 Jan 25 12:27 UTC | 20 Jan 25 12:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-987349                                  | embed-certs-987349           | jenkins | v1.35.0 | 20 Jan 25 12:27 UTC | 20 Jan 25 12:29 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-496524                  | no-preload-496524            | jenkins | v1.35.0 | 20 Jan 25 12:28 UTC | 20 Jan 25 12:28 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-496524                                   | no-preload-496524            | jenkins | v1.35.0 | 20 Jan 25 12:28 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-981597  | default-k8s-diff-port-981597 | jenkins | v1.35.0 | 20 Jan 25 12:28 UTC | 20 Jan 25 12:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-981597 | jenkins | v1.35.0 | 20 Jan 25 12:28 UTC | 20 Jan 25 12:30 UTC |
	|         | default-k8s-diff-port-981597                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-987349                 | embed-certs-987349           | jenkins | v1.35.0 | 20 Jan 25 12:29 UTC | 20 Jan 25 12:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-987349                                  | embed-certs-987349           | jenkins | v1.35.0 | 20 Jan 25 12:29 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-134433        | old-k8s-version-134433       | jenkins | v1.35.0 | 20 Jan 25 12:29 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-981597       | default-k8s-diff-port-981597 | jenkins | v1.35.0 | 20 Jan 25 12:30 UTC | 20 Jan 25 12:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-981597 | jenkins | v1.35.0 | 20 Jan 25 12:30 UTC |                     |
	|         | default-k8s-diff-port-981597                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-134433                              | old-k8s-version-134433       | jenkins | v1.35.0 | 20 Jan 25 12:31 UTC | 20 Jan 25 12:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-134433             | old-k8s-version-134433       | jenkins | v1.35.0 | 20 Jan 25 12:31 UTC | 20 Jan 25 12:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-134433                              | old-k8s-version-134433       | jenkins | v1.35.0 | 20 Jan 25 12:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 12:31:11
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 12:31:11.956010  993585 out.go:345] Setting OutFile to fd 1 ...
	I0120 12:31:11.956137  993585 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:31:11.956148  993585 out.go:358] Setting ErrFile to fd 2...
	I0120 12:31:11.956152  993585 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:31:11.956366  993585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-942401/.minikube/bin
	I0120 12:31:11.956993  993585 out.go:352] Setting JSON to false
	I0120 12:31:11.958067  993585 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":18815,"bootTime":1737357457,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 12:31:11.958186  993585 start.go:139] virtualization: kvm guest
	I0120 12:31:11.960398  993585 out.go:177] * [old-k8s-version-134433] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 12:31:11.961613  993585 out.go:177]   - MINIKUBE_LOCATION=20151
	I0120 12:31:11.961713  993585 notify.go:220] Checking for updates...
	I0120 12:31:11.964011  993585 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 12:31:11.965092  993585 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:31:11.966144  993585 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-942401/.minikube
	I0120 12:31:11.967208  993585 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 12:31:11.968350  993585 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 12:31:11.969863  993585 config.go:182] Loaded profile config "old-k8s-version-134433": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0120 12:31:11.970277  993585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:31:11.970346  993585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:31:11.985419  993585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43807
	I0120 12:31:11.985879  993585 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:31:11.986551  993585 main.go:141] libmachine: Using API Version  1
	I0120 12:31:11.986596  993585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:31:11.986957  993585 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:31:11.987146  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:31:11.988784  993585 out.go:177] * Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
	I0120 12:31:11.989825  993585 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 12:31:11.990150  993585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:31:11.990189  993585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:31:12.004831  993585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34269
	I0120 12:31:12.005226  993585 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:31:12.005709  993585 main.go:141] libmachine: Using API Version  1
	I0120 12:31:12.005734  993585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:31:12.006077  993585 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:31:12.006313  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:31:12.043016  993585 out.go:177] * Using the kvm2 driver based on existing profile
	I0120 12:31:12.044104  993585 start.go:297] selected driver: kvm2
	I0120 12:31:12.044121  993585 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-134433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-134433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:31:12.044209  993585 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 12:31:12.044916  993585 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:31:12.045000  993585 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20151-942401/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 12:31:12.060200  993585 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 12:31:12.060534  993585 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 12:31:12.060567  993585 cni.go:84] Creating CNI manager for ""
	I0120 12:31:12.060601  993585 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:31:12.060657  993585 start.go:340] cluster config:
	{Name:old-k8s-version-134433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-134433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:31:12.060783  993585 iso.go:125] acquiring lock: {Name:mk7f0ac9e7ba04626414e742c3e5292d79996a4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:31:12.062963  993585 out.go:177] * Starting "old-k8s-version-134433" primary control-plane node in "old-k8s-version-134433" cluster
	I0120 12:31:12.064143  993585 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0120 12:31:12.064184  993585 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0120 12:31:12.064195  993585 cache.go:56] Caching tarball of preloaded images
	I0120 12:31:12.064275  993585 preload.go:172] Found /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 12:31:12.064287  993585 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0120 12:31:12.064378  993585 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/config.json ...
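The profile config saved here drives the rest of the restart; it can be inspected directly from the path in the log, e.g.:

	cat /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/config.json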
	I0120 12:31:12.064565  993585 start.go:360] acquireMachinesLock for old-k8s-version-134433: {Name:mkd5527ce9753efd08511b23d71dbb6bbf416f1b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 12:31:12.064608  993585 start.go:364] duration metric: took 25.197µs to acquireMachinesLock for "old-k8s-version-134433"
	I0120 12:31:12.064624  993585 start.go:96] Skipping create...Using existing machine configuration
	I0120 12:31:12.064632  993585 fix.go:54] fixHost starting: 
	I0120 12:31:12.064897  993585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:31:12.064947  993585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:31:12.079979  993585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41799
	I0120 12:31:12.080385  993585 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:31:12.080944  993585 main.go:141] libmachine: Using API Version  1
	I0120 12:31:12.080969  993585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:31:12.081279  993585 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:31:12.081512  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:31:12.081673  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetState
	I0120 12:31:12.083222  993585 fix.go:112] recreateIfNeeded on old-k8s-version-134433: state=Stopped err=<nil>
	I0120 12:31:12.083247  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	W0120 12:31:12.083395  993585 fix.go:138] unexpected machine state, will restart: <nil>
	I0120 12:31:12.084950  993585 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-134433" ...
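The restart goes through libvirt, so the same domain state can be inspected outside minikube (assuming virsh is available on the host) with:

	virsh -c qemu:///system list --all
	virsh -c qemu:///system dominfo old-k8s-version-134433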
	I0120 12:31:07.641120  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:10.142764  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:10.684376  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:12.684889  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:11.967640  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:13.968387  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:12.086040  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .Start
	I0120 12:31:12.086250  993585 main.go:141] libmachine: (old-k8s-version-134433) starting domain...
	I0120 12:31:12.086274  993585 main.go:141] libmachine: (old-k8s-version-134433) ensuring networks are active...
	I0120 12:31:12.087116  993585 main.go:141] libmachine: (old-k8s-version-134433) Ensuring network default is active
	I0120 12:31:12.087507  993585 main.go:141] libmachine: (old-k8s-version-134433) Ensuring network mk-old-k8s-version-134433 is active
	I0120 12:31:12.087972  993585 main.go:141] libmachine: (old-k8s-version-134433) getting domain XML...
	I0120 12:31:12.088701  993585 main.go:141] libmachine: (old-k8s-version-134433) creating domain...
	I0120 12:31:13.353235  993585 main.go:141] libmachine: (old-k8s-version-134433) waiting for IP...
	I0120 12:31:13.354008  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:13.354424  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:13.354568  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:13.354436  993621 retry.go:31] will retry after 195.738853ms: waiting for domain to come up
	I0120 12:31:13.551979  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:13.552485  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:13.552546  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:13.552470  993621 retry.go:31] will retry after 286.807934ms: waiting for domain to come up
	I0120 12:31:13.841028  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:13.841561  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:13.841601  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:13.841522  993621 retry.go:31] will retry after 438.177816ms: waiting for domain to come up
	I0120 12:31:14.280867  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:14.281254  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:14.281287  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:14.281212  993621 retry.go:31] will retry after 401.413585ms: waiting for domain to come up
	I0120 12:31:14.684677  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:14.685256  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:14.685288  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:14.685176  993621 retry.go:31] will retry after 625.770313ms: waiting for domain to come up
	I0120 12:31:15.312721  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:15.313245  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:15.313281  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:15.313210  993621 retry.go:31] will retry after 842.789855ms: waiting for domain to come up
	I0120 12:31:16.157329  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:16.157939  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:16.157970  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:16.157917  993621 retry.go:31] will retry after 997.649049ms: waiting for domain to come up
	I0120 12:31:12.642593  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:15.141471  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:17.141620  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:14.686169  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:17.184821  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:16.467025  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:18.966945  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:17.157668  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:17.158288  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:17.158346  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:17.158266  993621 retry.go:31] will retry after 1.3317802s: waiting for domain to come up
	I0120 12:31:18.491767  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:18.492314  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:18.492345  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:18.492274  993621 retry.go:31] will retry after 1.684115629s: waiting for domain to come up
	I0120 12:31:20.177742  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:20.178312  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:20.178344  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:20.178272  993621 retry.go:31] will retry after 2.098717757s: waiting for domain to come up
	I0120 12:31:19.141727  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:21.142012  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:19.684947  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:21.686415  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:24.185262  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:20.969393  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:23.466563  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:25.468388  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:22.279263  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:22.279782  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:22.279815  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:22.279747  993621 retry.go:31] will retry after 2.908067158s: waiting for domain to come up
	I0120 12:31:25.191591  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:25.192058  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:25.192082  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:25.192027  993621 retry.go:31] will retry after 2.860704715s: waiting for domain to come up
	I0120 12:31:23.142601  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:25.641748  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:26.685300  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:29.186578  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:27.967731  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:30.467076  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:28.053824  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:28.054209  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | unable to find current IP address of domain old-k8s-version-134433 in network mk-old-k8s-version-134433
	I0120 12:31:28.054237  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | I0120 12:31:28.054168  993621 retry.go:31] will retry after 3.593877393s: waiting for domain to come up
	I0120 12:31:31.651977  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.652456  993585 main.go:141] libmachine: (old-k8s-version-134433) found domain IP: 192.168.50.250
	I0120 12:31:31.652477  993585 main.go:141] libmachine: (old-k8s-version-134433) reserving static IP address...
	I0120 12:31:31.652499  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has current primary IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.652880  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "old-k8s-version-134433", mac: "52:54:00:4a:b6:e2", ip: "192.168.50.250"} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:31.652910  993585 main.go:141] libmachine: (old-k8s-version-134433) reserved static IP address 192.168.50.250 for domain old-k8s-version-134433
	I0120 12:31:31.652928  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | skip adding static IP to network mk-old-k8s-version-134433 - found existing host DHCP lease matching {name: "old-k8s-version-134433", mac: "52:54:00:4a:b6:e2", ip: "192.168.50.250"}
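The IP-wait loop above polls libvirt for a DHCP lease on the mk-old-k8s-version-134433 network; the same lease table can be read by hand (again assuming host access to virsh) with:

	virsh -c qemu:///system net-dhcp-leases mk-old-k8s-version-134433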
	I0120 12:31:31.652949  993585 main.go:141] libmachine: (old-k8s-version-134433) waiting for SSH...
	I0120 12:31:31.652979  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | Getting to WaitForSSH function...
	I0120 12:31:31.655045  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.655323  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:31.655341  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.655472  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | Using SSH client type: external
	I0120 12:31:31.655509  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | Using SSH private key: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433/id_rsa (-rw-------)
	I0120 12:31:31.655555  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.250 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 12:31:31.655574  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | About to run SSH command:
	I0120 12:31:31.655599  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | exit 0
	I0120 12:31:31.778333  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | SSH cmd err, output: <nil>: 
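The SSH wait succeeds here; an equivalent manual reachability check, reusing the key and options shown in the log above, would be roughly:

	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=10 -i /home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433/id_rsa docker@192.168.50.250 'exit 0'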
	I0120 12:31:31.778766  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetConfigRaw
	I0120 12:31:31.779451  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetIP
	I0120 12:31:31.782111  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.782481  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:31.782538  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.782728  993585 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/config.json ...
	I0120 12:31:31.782983  993585 machine.go:93] provisionDockerMachine start ...
	I0120 12:31:31.783008  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:31:31.783221  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:31.785482  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.785771  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:31.785804  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.785958  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:31:31.786153  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:31.786352  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:31.786496  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:31:31.786666  993585 main.go:141] libmachine: Using SSH client type: native
	I0120 12:31:31.786905  993585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I0120 12:31:31.786918  993585 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 12:31:31.886822  993585 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0120 12:31:31.886860  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetMachineName
	I0120 12:31:31.887127  993585 buildroot.go:166] provisioning hostname "old-k8s-version-134433"
	I0120 12:31:31.887156  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetMachineName
	I0120 12:31:31.887366  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:31.890506  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.890962  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:31.891053  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:31.891155  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:31:31.891355  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:31.891522  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:31.891722  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:31:31.891900  993585 main.go:141] libmachine: Using SSH client type: native
	I0120 12:31:31.892067  993585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I0120 12:31:31.892078  993585 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-134433 && echo "old-k8s-version-134433" | sudo tee /etc/hostname
	I0120 12:31:27.642107  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:30.141452  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:32.142854  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:32.007463  993585 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-134433
	
	I0120 12:31:32.007490  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:32.010730  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.011157  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.011184  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.011407  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:31:32.011597  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.011774  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.011883  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:31:32.012032  993585 main.go:141] libmachine: Using SSH client type: native
	I0120 12:31:32.012246  993585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I0120 12:31:32.012275  993585 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-134433' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-134433/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-134433' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 12:31:32.122811  993585 main.go:141] libmachine: SSH cmd err, output: <nil>: 
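
The shell snippet above either rewrites an existing 127.0.1.1 entry or appends one. Below is a small Go sketch of the same idempotent /etc/hosts edit; the file path and hostname are taken from the log, and the simple containment check stands in for the grep.

// hosts.go - ensure /etc/hosts maps 127.0.1.1 to the machine's hostname.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostname(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		if strings.Contains(l, hostname) {
			return nil // an entry already exists, nothing to do
		}
	}
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname // rewrite the existing loopback alias
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname) // or append a new one
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostname("/etc/hosts", "old-k8s-version-134433"); err != nil {
		fmt.Println("hosts update failed:", err)
	}
}
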
	I0120 12:31:32.122845  993585 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20151-942401/.minikube CaCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20151-942401/.minikube}
	I0120 12:31:32.122865  993585 buildroot.go:174] setting up certificates
	I0120 12:31:32.122875  993585 provision.go:84] configureAuth start
	I0120 12:31:32.122884  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetMachineName
	I0120 12:31:32.123125  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetIP
	I0120 12:31:32.125986  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.126423  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.126446  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.126677  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:32.128626  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.129281  993585 provision.go:143] copyHostCerts
	I0120 12:31:32.129354  993585 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem, removing ...
	I0120 12:31:32.129380  993585 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem
	I0120 12:31:32.129382  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.129411  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.129470  993585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/key.pem (1675 bytes)
	I0120 12:31:32.129581  993585 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem, removing ...
	I0120 12:31:32.129592  993585 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem
	I0120 12:31:32.129634  993585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/ca.pem (1078 bytes)
	I0120 12:31:32.129702  993585 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem, removing ...
	I0120 12:31:32.129712  993585 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem
	I0120 12:31:32.129741  993585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20151-942401/.minikube/cert.pem (1123 bytes)
	I0120 12:31:32.129806  993585 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-134433 san=[127.0.0.1 192.168.50.250 localhost minikube old-k8s-version-134433]
	I0120 12:31:32.226358  993585 provision.go:177] copyRemoteCerts
	I0120 12:31:32.226410  993585 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 12:31:32.226432  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:32.228814  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.229133  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.229168  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.229333  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:31:32.229548  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.229722  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:31:32.229881  993585 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433/id_rsa Username:docker}
	I0120 12:31:32.315787  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0120 12:31:32.341389  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0120 12:31:32.364095  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0120 12:31:32.386543  993585 provision.go:87] duration metric: took 263.65519ms to configureAuth
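
The configureAuth step above issues a server certificate signed by the local CA with the IP and DNS SANs listed in the log. The following self-contained Go sketch shows that kind of issuance with crypto/x509; the keys here are generated in memory and the validity period is an assumption, so this is an illustration rather than minikube's own cert helper.

// servercert.go - issue a CA-signed server certificate with IP and DNS SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA, standing in for the persisted ca.pem/ca-key.pem pair.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0), // validity period assumed
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate with the SAN list seen in the log.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-134433"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-134433"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.250")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}
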
	I0120 12:31:32.386572  993585 buildroot.go:189] setting minikube options for container-runtime
	I0120 12:31:32.386750  993585 config.go:182] Loaded profile config "old-k8s-version-134433": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0120 12:31:32.386844  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:32.389737  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.390222  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.390257  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.390478  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:31:32.390683  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.390858  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.391063  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:31:32.391234  993585 main.go:141] libmachine: Using SSH client type: native
	I0120 12:31:32.391417  993585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I0120 12:31:32.391438  993585 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 12:31:32.617034  993585 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 12:31:32.617072  993585 machine.go:96] duration metric: took 834.071068ms to provisionDockerMachine
	I0120 12:31:32.617085  993585 start.go:293] postStartSetup for "old-k8s-version-134433" (driver="kvm2")
	I0120 12:31:32.617096  993585 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 12:31:32.617121  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:31:32.617506  993585 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 12:31:32.617547  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:32.620838  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.621275  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.621310  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.621640  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:31:32.621865  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.622064  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:31:32.622248  993585 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433/id_rsa Username:docker}
	I0120 12:31:32.703904  993585 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 12:31:32.707878  993585 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 12:31:32.707902  993585 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-942401/.minikube/addons for local assets ...
	I0120 12:31:32.707970  993585 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-942401/.minikube/files for local assets ...
	I0120 12:31:32.708078  993585 filesync.go:149] local asset: /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem -> 9496562.pem in /etc/ssl/certs
	I0120 12:31:32.708218  993585 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 12:31:32.716746  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem --> /etc/ssl/certs/9496562.pem (1708 bytes)
	I0120 12:31:32.739636  993585 start.go:296] duration metric: took 122.539492ms for postStartSetup
	I0120 12:31:32.739674  993585 fix.go:56] duration metric: took 20.675041615s for fixHost
	I0120 12:31:32.739700  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:32.742857  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.743259  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.743291  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.743451  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:31:32.743616  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.743807  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.743953  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:31:32.744112  993585 main.go:141] libmachine: Using SSH client type: native
	I0120 12:31:32.744267  993585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.250 22 <nil> <nil>}
	I0120 12:31:32.744277  993585 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 12:31:32.850613  993585 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737376292.825194263
	
	I0120 12:31:32.850655  993585 fix.go:216] guest clock: 1737376292.825194263
	I0120 12:31:32.850667  993585 fix.go:229] Guest: 2025-01-20 12:31:32.825194263 +0000 UTC Remote: 2025-01-20 12:31:32.739679914 +0000 UTC m=+20.823511960 (delta=85.514349ms)
	I0120 12:31:32.850692  993585 fix.go:200] guest clock delta is within tolerance: 85.514349ms
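
The clock check above reads `date +%s.%N` on the guest and compares it with the host clock. A short Go sketch of that comparison follows; the sample value comes from this log, and the one-second tolerance is an assumption for illustration.

// clockdelta.go - compare a guest `date +%s.%N` reading with the host clock.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpoch turns "1737376292.825194263" into a time.Time.
func parseEpoch(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseEpoch("1737376292.825194263") // value from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // tolerance value assumed for illustration
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
}
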
	I0120 12:31:32.850697  993585 start.go:83] releasing machines lock for "old-k8s-version-134433", held for 20.786078788s
	I0120 12:31:32.850723  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:31:32.850994  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetIP
	I0120 12:31:32.853508  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.853864  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.853895  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.854081  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:31:32.854574  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:31:32.854785  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .DriverName
	I0120 12:31:32.854878  993585 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 12:31:32.854915  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:32.855040  993585 ssh_runner.go:195] Run: cat /version.json
	I0120 12:31:32.855073  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHHostname
	I0120 12:31:32.857825  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.858071  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.858242  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.858273  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.858472  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:31:32.858613  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:32.858642  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:32.858678  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.858803  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHPort
	I0120 12:31:32.858907  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:31:32.858970  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHKeyPath
	I0120 12:31:32.859042  993585 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433/id_rsa Username:docker}
	I0120 12:31:32.859089  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetSSHUsername
	I0120 12:31:32.859218  993585 sshutil.go:53] new ssh client: &{IP:192.168.50.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/old-k8s-version-134433/id_rsa Username:docker}
	I0120 12:31:32.963636  993585 ssh_runner.go:195] Run: systemctl --version
	I0120 12:31:32.969637  993585 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 12:31:33.109368  993585 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 12:31:33.116476  993585 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 12:31:33.116551  993585 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 12:31:33.132563  993585 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 12:31:33.132586  993585 start.go:495] detecting cgroup driver to use...
	I0120 12:31:33.132666  993585 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 12:31:33.149598  993585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 12:31:33.163579  993585 docker.go:217] disabling cri-docker service (if available) ...
	I0120 12:31:33.163644  993585 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 12:31:33.176714  993585 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 12:31:33.190002  993585 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 12:31:33.317215  993585 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 12:31:33.474712  993585 docker.go:233] disabling docker service ...
	I0120 12:31:33.474786  993585 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 12:31:33.487733  993585 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 12:31:33.500315  993585 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 12:31:33.629138  993585 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 12:31:33.765704  993585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 12:31:33.780662  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 12:31:33.799085  993585 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0120 12:31:33.799155  993585 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:31:33.808607  993585 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 12:31:33.808659  993585 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:31:33.818065  993585 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:31:33.827515  993585 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
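
These sed invocations rewrite keys in the CRI-O drop-in. A rough Go equivalent that replaces (or appends) the `pause_image` and `cgroup_manager` lines in the same file is shown below; the path comes from the log and error handling is kept minimal.

// crioconf.go - rewrite key = "value" lines in a crio.conf.d drop-in.
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setOption replaces an existing `key = ...` line or appends one.
func setOption(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + key + ` = .*$`)
	line := fmt.Sprintf("%s = %q", key, value)
	if re.Match(conf) {
		return re.ReplaceAll(conf, []byte(line))
	}
	return append(conf, []byte("\n"+line+"\n")...)
}

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log
	conf, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf = setOption(conf, "pause_image", "registry.k8s.io/pause:3.2")
	conf = setOption(conf, "cgroup_manager", "cgroupfs")
	if err := os.WriteFile(path, conf, 0644); err != nil {
		panic(err)
	}
}
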
	I0120 12:31:33.837226  993585 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 12:31:33.846616  993585 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 12:31:33.855024  993585 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 12:31:33.855077  993585 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 12:31:33.867670  993585 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
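
The failed sysctl above simply means the br_netfilter module is not loaded yet, so the module is loaded and IPv4 forwarding is enabled afterwards. A compact Go sketch of that fallback (it needs root, just as the logged commands do):

// netprep.go - load br_netfilter if its sysctl key is missing, then enable ip_forward.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(key); err != nil {
		// Key absent: br_netfilter is not loaded (matches the "cannot stat" error above).
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe failed: %v: %s\n", err, out)
			return
		}
	}
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Println("enabling ip_forward failed:", err)
	}
}
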
	I0120 12:31:33.876402  993585 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:31:34.006664  993585 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 12:31:34.098750  993585 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 12:31:34.098834  993585 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 12:31:34.103642  993585 start.go:563] Will wait 60s for crictl version
	I0120 12:31:34.103699  993585 ssh_runner.go:195] Run: which crictl
	I0120 12:31:34.107125  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 12:31:34.144190  993585 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 12:31:34.144288  993585 ssh_runner.go:195] Run: crio --version
	I0120 12:31:34.172817  993585 ssh_runner.go:195] Run: crio --version
	I0120 12:31:34.203224  993585 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0120 12:31:31.684648  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:33.685881  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:32.467705  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:34.470006  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:34.204485  993585 main.go:141] libmachine: (old-k8s-version-134433) Calling .GetIP
	I0120 12:31:34.207458  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:34.207876  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:b6:e2", ip: ""} in network mk-old-k8s-version-134433: {Iface:virbr2 ExpiryTime:2025-01-20 13:31:23 +0000 UTC Type:0 Mac:52:54:00:4a:b6:e2 Iaid: IPaddr:192.168.50.250 Prefix:24 Hostname:old-k8s-version-134433 Clientid:01:52:54:00:4a:b6:e2}
	I0120 12:31:34.207904  993585 main.go:141] libmachine: (old-k8s-version-134433) DBG | domain old-k8s-version-134433 has defined IP address 192.168.50.250 and MAC address 52:54:00:4a:b6:e2 in network mk-old-k8s-version-134433
	I0120 12:31:34.208137  993585 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0120 12:31:34.211891  993585 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 12:31:34.223705  993585 kubeadm.go:883] updating cluster {Name:old-k8s-version-134433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-134433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 12:31:34.223826  993585 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0120 12:31:34.223864  993585 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:31:34.268289  993585 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0120 12:31:34.268365  993585 ssh_runner.go:195] Run: which lz4
	I0120 12:31:34.272014  993585 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 12:31:34.275957  993585 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 12:31:34.275987  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0120 12:31:35.756157  993585 crio.go:462] duration metric: took 1.484200004s to copy over tarball
	I0120 12:31:35.756230  993585 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 12:31:34.642634  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:37.142882  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:35.687588  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:38.185847  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:36.967824  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:38.968146  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:38.594323  993585 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.838057752s)
	I0120 12:31:38.594429  993585 crio.go:469] duration metric: took 2.838184511s to extract the tarball
	I0120 12:31:38.594454  993585 ssh_runner.go:146] rm: /preloaded.tar.lz4
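
A minimal Go sketch of the preload handling shown above: extract the lz4-compressed image tarball into /var and delete it afterwards. The tarball path is the one from the log; tar and lz4 are assumed to be on PATH.

// preload.go - unpack the preloaded image tarball and clean it up.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	tarball := "/preloaded.tar.lz4" // path used in the log
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", // decompress with lz4 before untarring
		"-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v: %s\n", err, out)
		return
	}
	if err := os.Remove(tarball); err != nil {
		fmt.Println("cleanup failed:", err)
	}
}
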
	I0120 12:31:38.636288  993585 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:31:38.673987  993585 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0120 12:31:38.674016  993585 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0120 12:31:38.674097  993585 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:31:38.674135  993585 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0120 12:31:38.674145  993585 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:31:38.674178  993585 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:31:38.674112  993585 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:31:38.674208  993585 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0120 12:31:38.674120  993585 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:31:38.674479  993585 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0120 12:31:38.675856  993585 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:31:38.675888  993585 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:31:38.675857  993585 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0120 12:31:38.675857  993585 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:31:38.675858  993585 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:31:38.675860  993585 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0120 12:31:38.675864  993585 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0120 12:31:38.675864  993585 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:31:38.891668  993585 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0120 12:31:38.898693  993585 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:31:38.901324  993585 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:31:38.903830  993585 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:31:38.907827  993585 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0120 12:31:38.909691  993585 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0120 12:31:38.911977  993585 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:31:38.988279  993585 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0120 12:31:38.988332  993585 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0120 12:31:38.988388  993585 ssh_runner.go:195] Run: which crictl
	I0120 12:31:39.039162  993585 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0120 12:31:39.039204  993585 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:31:39.039255  993585 ssh_runner.go:195] Run: which crictl
	I0120 12:31:39.070879  993585 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0120 12:31:39.070922  993585 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:31:39.070974  993585 ssh_runner.go:195] Run: which crictl
	I0120 12:31:39.078869  993585 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0120 12:31:39.078897  993585 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0120 12:31:39.078910  993585 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:31:39.078930  993585 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0120 12:31:39.078948  993585 ssh_runner.go:195] Run: which crictl
	I0120 12:31:39.078955  993585 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0120 12:31:39.078982  993585 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0120 12:31:39.078982  993585 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0120 12:31:39.079004  993585 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:31:39.079014  993585 ssh_runner.go:195] Run: which crictl
	I0120 12:31:39.078986  993585 ssh_runner.go:195] Run: which crictl
	I0120 12:31:39.079039  993585 ssh_runner.go:195] Run: which crictl
	I0120 12:31:39.079028  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 12:31:39.079059  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:31:39.081555  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:31:39.083015  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:31:39.130647  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 12:31:39.130694  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:31:39.186867  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:31:39.186961  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 12:31:39.186966  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 12:31:39.209991  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:31:39.210008  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:31:39.246249  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 12:31:39.246259  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 12:31:39.321520  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:31:39.321580  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 12:31:39.336397  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 12:31:39.361423  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 12:31:39.361625  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 12:31:39.382747  993585 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0120 12:31:39.382804  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 12:31:39.434483  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 12:31:39.434505  993585 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 12:31:39.494972  993585 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0120 12:31:39.495045  993585 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0120 12:31:39.520487  993585 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0120 12:31:39.520534  993585 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0120 12:31:39.529832  993585 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0120 12:31:39.530428  993585 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0120 12:31:39.865446  993585 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:31:40.001428  993585 cache_images.go:92] duration metric: took 1.327395723s to LoadCachedImages
	W0120 12:31:40.001521  993585 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20151-942401/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
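
Because no preloaded images were usable, the cache check compares what `crictl images --output json` reports against the required image list and marks the rest as needing transfer. A sketch of that comparison follows; the JSON field names follow crictl's CRI list output, and the `required` slice here is a shortened, illustrative subset.

// imagecheck.go - report which required images the runtime does not yet have.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.20.0",
		"registry.k8s.io/etcd:3.4.13-0",
		"registry.k8s.io/pause:3.2",
	}
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range required {
		if !have[want] {
			fmt.Println("needs transfer:", want)
		}
	}
}
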
	I0120 12:31:40.001540  993585 kubeadm.go:934] updating node { 192.168.50.250 8443 v1.20.0 crio true true} ...
	I0120 12:31:40.001666  993585 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-134433 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-134433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
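
The kubelet drop-in above is rendered from a handful of per-node values. Below is a small text/template sketch that produces a unit of the same shape; the flag set is trimmed for brevity and the struct fields are illustrative, not minikube's own types.

// kubeletunit.go - render a kubelet systemd drop-in from per-node values.
package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}

[Install]
`

type node struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(unit))
	if err := tmpl.Execute(os.Stdout, node{
		KubernetesVersion: "v1.20.0",
		NodeName:          "old-k8s-version-134433",
		NodeIP:            "192.168.50.250",
	}); err != nil {
		panic(err)
	}
}
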
	I0120 12:31:40.001759  993585 ssh_runner.go:195] Run: crio config
	I0120 12:31:40.049768  993585 cni.go:84] Creating CNI manager for ""
	I0120 12:31:40.049788  993585 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:31:40.049798  993585 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 12:31:40.049817  993585 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.250 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-134433 NodeName:old-k8s-version-134433 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0120 12:31:40.049953  993585 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-134433"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.250
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.250"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 12:31:40.050035  993585 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0120 12:31:40.060513  993585 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 12:31:40.060576  993585 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 12:31:40.070416  993585 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0120 12:31:40.086321  993585 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 12:31:40.101428  993585 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0120 12:31:40.118688  993585 ssh_runner.go:195] Run: grep 192.168.50.250	control-plane.minikube.internal$ /etc/hosts
	I0120 12:31:40.122319  993585 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.250	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 12:31:40.133757  993585 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:31:40.267585  993585 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:31:40.285307  993585 certs.go:68] Setting up /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433 for IP: 192.168.50.250
	I0120 12:31:40.285334  993585 certs.go:194] generating shared ca certs ...
	I0120 12:31:40.285359  993585 certs.go:226] acquiring lock for ca certs: {Name:mk7d6f61e0c358ebc451db1b73619131ab80bd63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:31:40.285629  993585 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20151-942401/.minikube/ca.key
	I0120 12:31:40.285712  993585 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.key
	I0120 12:31:40.285729  993585 certs.go:256] generating profile certs ...
	I0120 12:31:40.285868  993585 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/client.key
	I0120 12:31:40.320727  993585 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/apiserver.key.6d656c93
	I0120 12:31:40.320836  993585 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/proxy-client.key
	I0120 12:31:40.321012  993585 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656.pem (1338 bytes)
	W0120 12:31:40.321045  993585 certs.go:480] ignoring /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656_empty.pem, impossibly tiny 0 bytes
	I0120 12:31:40.321055  993585 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca-key.pem (1679 bytes)
	I0120 12:31:40.321077  993585 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/ca.pem (1078 bytes)
	I0120 12:31:40.321112  993585 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/cert.pem (1123 bytes)
	I0120 12:31:40.321133  993585 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/certs/key.pem (1675 bytes)
	I0120 12:31:40.321173  993585 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem (1708 bytes)
	I0120 12:31:40.321820  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 12:31:40.355849  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0120 12:31:40.384987  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 12:31:40.412042  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 12:31:40.443057  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0120 12:31:40.487592  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0120 12:31:40.524256  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 12:31:40.548205  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 12:31:40.570407  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/certs/949656.pem --> /usr/share/ca-certificates/949656.pem (1338 bytes)
	I0120 12:31:40.594640  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/ssl/certs/9496562.pem --> /usr/share/ca-certificates/9496562.pem (1708 bytes)
	I0120 12:31:40.617736  993585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 12:31:40.642388  993585 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
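The scp lines above push the host-side PKI onto the node: the shared CA and proxy-client CA pairs plus the profile's apiserver and proxy-client certs land under /var/lib/minikube/certs, the per-user certs land under /usr/share/ca-certificates, and finally an in-memory kubeconfig is written to /var/lib/minikube/kubeconfig. The sketch below only illustrates the source-to-destination mapping with a plain local copy; minikube copies over SSH, and the profile path here is an assumed layout, not a value from this run.

	package main

	import (
		"fmt"
		"io"
		"os"
	)

	// copyFile stands in for the scp step in the log: same destinations,
	// plain local copy instead of a copy over SSH.
	func copyFile(src, dst string) error {
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	}

	func main() {
		// Illustrative profile directory; adjust to your own minikube home.
		profile := os.Getenv("HOME") + "/.minikube/profiles/old-k8s-version-134433"
		pairs := map[string]string{
			profile + "/apiserver.crt":    "/var/lib/minikube/certs/apiserver.crt",
			profile + "/apiserver.key":    "/var/lib/minikube/certs/apiserver.key",
			profile + "/proxy-client.crt": "/var/lib/minikube/certs/proxy-client.crt",
			profile + "/proxy-client.key": "/var/lib/minikube/certs/proxy-client.key",
		}
		for src, dst := range pairs {
			if err := copyFile(src, dst); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}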
	I0120 12:31:40.658180  993585 ssh_runner.go:195] Run: openssl version
	I0120 12:31:40.663613  993585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/949656.pem && ln -fs /usr/share/ca-certificates/949656.pem /etc/ssl/certs/949656.pem"
	I0120 12:31:40.673079  993585 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/949656.pem
	I0120 12:31:40.677607  993585 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 11:30 /usr/share/ca-certificates/949656.pem
	I0120 12:31:40.677688  993585 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/949656.pem
	I0120 12:31:40.684863  993585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/949656.pem /etc/ssl/certs/51391683.0"
	I0120 12:31:40.694838  993585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9496562.pem && ln -fs /usr/share/ca-certificates/9496562.pem /etc/ssl/certs/9496562.pem"
	I0120 12:31:40.704251  993585 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9496562.pem
	I0120 12:31:40.708616  993585 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 11:30 /usr/share/ca-certificates/9496562.pem
	I0120 12:31:40.708671  993585 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9496562.pem
	I0120 12:31:40.714178  993585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9496562.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 12:31:40.723770  993585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 12:31:40.733248  993585 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:31:40.737473  993585 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 11:22 /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:31:40.737526  993585 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:31:40.742896  993585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
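The three `openssl x509 -hash` / `ln -fs` pairs above follow the standard OpenSSL trust-store convention: every CA certificate in /etc/ssl/certs must be reachable through a symlink named <subject-hash>.0 (hence 51391683.0, 3ec20f2e.0 and b5213941.0). A minimal sketch of that convention, assuming placeholder paths rather than minikube's exact implementation:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// installCACert links certPath into certsDir under the OpenSSL
	// subject-hash name (<hash>.0), mirroring the openssl + ln -fs pair
	// seen in the log above.
	func installCACert(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("%s/%s.0", certsDir, hash)
		// -f semantics: drop any stale link before relinking.
		_ = os.Remove(link)
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}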
	I0120 12:31:40.752426  993585 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 12:31:40.756579  993585 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 12:31:40.761769  993585 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 12:31:40.766935  993585 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 12:31:40.772427  993585 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 12:31:40.777720  993585 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 12:31:40.782945  993585 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
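The `-checkend 86400` runs above ask OpenSSL whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means it will, non-zero means it is expired or about to expire, which is when regeneration would kick in. A small sketch of the same check (function name and example path are illustrative):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// certValidForADay reports whether the certificate at path is still valid
	// 86400 seconds (24h) from now, using the same openssl flags as the log.
	func certValidForADay(path string) bool {
		// openssl exits 0 only if the cert does NOT expire within the window.
		return exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").Run() == nil
	}

	func main() {
		fmt.Println(certValidForADay("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
	}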
	I0120 12:31:40.788029  993585 kubeadm.go:392] StartCluster: {Name:old-k8s-version-134433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-134433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.250 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:31:40.788161  993585 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 12:31:40.788202  993585 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 12:31:40.825500  993585 cri.go:89] found id: ""
	I0120 12:31:40.825563  993585 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 12:31:40.835567  993585 kubeadm.go:408] found existing configuration files, will attempt cluster restart
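The two commands just above are the restart heuristic: minikube probes for existing kubeadm artifacts (kubeadm-flags.env, config.yaml, the etcd data dir) with a plain `ls`, and if they are present it attempts a cluster restart instead of a fresh `kubeadm init`. A sketch of that decision, under the assumption that presence of all three paths is the whole test:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// shouldRestart mimics the probe from the log: an `ls` over the kubeadm
	// state paths on the node, treated as a boolean.
	func shouldRestart() bool {
		err := exec.Command("sudo", "ls",
			"/var/lib/kubelet/kubeadm-flags.env",
			"/var/lib/kubelet/config.yaml",
			"/var/lib/minikube/etcd").Run()
		return err == nil // all paths present -> attempt restart
	}

	func main() {
		if shouldRestart() {
			fmt.Println("found existing configuration files, will attempt cluster restart")
		} else {
			fmt.Println("no existing configuration, full kubeadm init")
		}
	}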
	I0120 12:31:40.835588  993585 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0120 12:31:40.835635  993585 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0120 12:31:40.845152  993585 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0120 12:31:40.845853  993585 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-134433" does not appear in /home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:31:40.846275  993585 kubeconfig.go:62] /home/jenkins/minikube-integration/20151-942401/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-134433" cluster setting kubeconfig missing "old-k8s-version-134433" context setting]
	I0120 12:31:40.846897  993585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/kubeconfig: {Name:mk7b2d17f701fc845d991764ab9cc32b8b0646e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:31:40.937033  993585 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0120 12:31:40.947319  993585 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.250
	I0120 12:31:40.947380  993585 kubeadm.go:1160] stopping kube-system containers ...
	I0120 12:31:40.947395  993585 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0120 12:31:40.947453  993585 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 12:31:40.984392  993585 cri.go:89] found id: ""
	I0120 12:31:40.984458  993585 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0120 12:31:41.001578  993585 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:31:41.011794  993585 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:31:41.011819  993585 kubeadm.go:157] found existing configuration files:
	
	I0120 12:31:41.011875  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 12:31:41.021463  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:31:41.021518  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:31:41.030836  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 12:31:41.040645  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:31:41.040698  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:31:41.049821  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 12:31:41.058040  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:31:41.058097  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:31:41.066553  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 12:31:41.075225  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:31:41.075281  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
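The grep/rm pairs above are a stale-kubeconfig sweep: each file under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; otherwise (including when the file is simply missing, as in this run) it is removed so kubeadm can regenerate it in the next phase. A rough sketch of that loop:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			// grep exits non-zero if the endpoint is absent or the file is missing.
			if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
				fmt.Printf("%q not found in %s - removing\n", endpoint, f)
				_ = exec.Command("sudo", "rm", "-f", f).Run()
			}
		}
	}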
	I0120 12:31:41.084906  993585 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 12:31:41.093515  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:31:41.210064  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:31:41.666359  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:31:41.900869  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 12:31:39.144316  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:41.165382  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:40.817405  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:43.185212  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:41.468125  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:43.966550  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:42.000812  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
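On this restart path the control plane is rebuilt by re-running individual `kubeadm init phase` subcommands against the generated /var/tmp/minikube/kubeadm.yaml, in the order the log shows: certs, kubeconfig, kubelet-start, control-plane, etcd. A sketch of that sequence with the same PATH prefix; the loop structure is mine, only the commands come from the log:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, p := range phases {
			cmd := fmt.Sprintf(
				`sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
			if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
				fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", p, err)
				os.Exit(1)
			}
		}
	}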
	I0120 12:31:42.089692  993585 api_server.go:52] waiting for apiserver process to appear ...
	I0120 12:31:42.089772  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:42.590338  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:43.090787  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:43.590769  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:44.090319  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:44.590108  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:45.089838  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:45.590766  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:46.089997  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:46.590717  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
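The repeated `pgrep -xnf kube-apiserver.*minikube.*` lines above, and the many that follow, are a poll: roughly twice a second minikube asks whether an apiserver process matching that pattern exists yet, and keeps asking until one appears or the wait times out. In this run it never appears, which is why the poll continues for minutes. A sketch of such a wait loop; the interval and timeout values are illustrative, not minikube's:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	// waitForAPIServer polls pgrep until a kube-apiserver process shows up
	// or the deadline passes.
	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil && len(out) > 0 {
				fmt.Printf("apiserver process appeared: pid %s", out)
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver process never appeared within %s", timeout)
	}

	func main() {
		if err := waitForAPIServer(4 * time.Minute); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}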
	I0120 12:31:43.642362  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:46.140694  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:45.684419  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:48.185535  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:45.967037  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:47.967799  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:50.468120  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
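The interleaved pod_ready lines come from parallel test profiles waiting for their metrics-server pods: each check reads the pod's status conditions and logs `"Ready":"False"` until the Ready condition flips to True or the wait times out. A rough equivalent shelling out to kubectl; the jsonpath query and the context/pod names are my placeholders, not minikube's code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// podReady returns true once the named pod's Ready condition is "True".
	func podReady(context, namespace, pod string) bool {
		out, err := exec.Command("kubectl", "--context", context, "-n", namespace,
			"get", "pod", pod, "-o",
			`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		return err == nil && strings.TrimSpace(string(out)) == "True"
	}

	func main() {
		// Placeholder profile and pod names for illustration.
		for !podReady("my-profile", "kube-system", "metrics-server-xxxxx") {
			fmt.Println(`pod has status "Ready":"False"`)
			time.Sleep(2 * time.Second)
		}
		fmt.Println("pod is Ready")
	}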
	I0120 12:31:47.090580  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:47.590292  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:48.090251  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:48.589947  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:49.090785  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:49.590768  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:50.090614  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:50.590558  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:51.090311  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:51.590228  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:48.141706  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:50.641289  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:50.684323  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:52.684538  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:52.968580  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:55.466922  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:52.090647  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:52.590162  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:53.090104  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:53.590691  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:54.090868  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:54.590219  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:55.090350  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:55.590003  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:56.090726  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:56.590283  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:52.641982  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:54.643173  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:57.142153  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:54.685013  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:57.186057  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:57.967658  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:59.968521  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:57.089873  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:57.590850  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:58.090780  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:58.590614  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:59.090635  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:59.590451  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:00.090701  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:00.590640  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:01.090753  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:01.590644  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:31:59.640970  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:01.641596  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:59.684870  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:01.685889  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:04.185105  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:02.466874  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:04.467851  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:02.089853  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:02.590807  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:03.089981  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:03.590808  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:04.090857  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:04.590757  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:05.089933  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:05.590271  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:06.090623  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:06.590064  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:03.644442  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:06.140708  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:06.185872  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:08.683979  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:06.468061  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:08.966912  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:07.090783  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:07.589932  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:08.090055  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:08.590241  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:09.089915  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:09.590298  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:10.089954  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:10.590262  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:11.090497  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:11.590292  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:08.142135  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:10.142823  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:10.685405  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:13.184959  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:11.467184  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:13.966687  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:12.090562  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:12.590135  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:13.090747  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:13.590675  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:14.089959  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:14.589956  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:15.090313  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:15.590672  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:16.090234  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:16.590838  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:12.641948  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:15.141465  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:15.685252  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:17.685468  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:15.968298  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:18.466913  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:17.090436  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:17.589874  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:18.089914  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:18.589959  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:19.090841  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:19.590272  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:20.090818  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:20.590893  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:21.090436  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:21.590656  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:17.641252  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:19.642645  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:22.140826  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:20.184125  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:22.184670  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:24.184995  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:20.967285  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:22.967592  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:25.467420  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:22.090802  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:22.589928  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:23.090636  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:23.590707  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:24.090639  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:24.590650  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:25.089995  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:25.590660  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:26.090132  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:26.590033  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:24.141192  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:26.641799  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:26.684732  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:29.185287  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:27.467860  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:29.967353  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:27.090577  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:27.590867  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:28.090984  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:28.590845  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:29.090300  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:29.590066  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:30.090684  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:30.590040  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:31.090303  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:31.590795  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:28.642020  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:31.141741  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:31.685583  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:34.184568  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:31.967618  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:34.468025  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:32.090206  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:32.590714  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:33.090718  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:33.590378  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:34.090656  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:34.590435  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:35.090317  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:35.590516  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:36.090582  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:36.589956  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:33.142049  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:35.142316  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:36.185027  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:38.684930  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:36.967096  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:39.467542  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:37.090078  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:37.590663  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:38.090428  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:38.590162  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:39.089913  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:39.590888  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:40.090661  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:40.590041  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:41.090883  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:41.590739  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:37.641649  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:40.140763  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:42.141742  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:40.686049  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:43.188216  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:41.966891  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:44.467792  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:42.090408  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:32:42.090485  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:32:42.129790  993585 cri.go:89] found id: ""
	I0120 12:32:42.129819  993585 logs.go:282] 0 containers: []
	W0120 12:32:42.129826  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:32:42.129832  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:32:42.129887  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:32:42.160523  993585 cri.go:89] found id: ""
	I0120 12:32:42.160546  993585 logs.go:282] 0 containers: []
	W0120 12:32:42.160555  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:32:42.160560  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:32:42.160606  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:32:42.194768  993585 cri.go:89] found id: ""
	I0120 12:32:42.194796  993585 logs.go:282] 0 containers: []
	W0120 12:32:42.194803  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:32:42.194808  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:32:42.194878  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:32:42.226406  993585 cri.go:89] found id: ""
	I0120 12:32:42.226435  993585 logs.go:282] 0 containers: []
	W0120 12:32:42.226443  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:32:42.226448  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:32:42.226497  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:32:42.263295  993585 cri.go:89] found id: ""
	I0120 12:32:42.263328  993585 logs.go:282] 0 containers: []
	W0120 12:32:42.263352  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:32:42.263362  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:32:42.263419  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:32:42.293754  993585 cri.go:89] found id: ""
	I0120 12:32:42.293784  993585 logs.go:282] 0 containers: []
	W0120 12:32:42.293794  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:32:42.293803  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:32:42.293866  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:32:42.327600  993585 cri.go:89] found id: ""
	I0120 12:32:42.327631  993585 logs.go:282] 0 containers: []
	W0120 12:32:42.327642  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:32:42.327650  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:32:42.327702  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:32:42.356668  993585 cri.go:89] found id: ""
	I0120 12:32:42.356698  993585 logs.go:282] 0 containers: []
	W0120 12:32:42.356710  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
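Each "listing CRI containers" / `found id: ""` pair above is one crictl query: containers are filtered by name, and an empty result means that component has no container at all on the node, which is consistent with the apiserver never showing up in the pgrep poll. A sketch of the same sweep over the component names from the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out))
			if err != nil || len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
		}
	}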
	I0120 12:32:42.356722  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:32:42.356734  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:32:42.405030  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:32:42.405063  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:32:42.417663  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:32:42.417690  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:32:42.538067  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:32:42.538100  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:32:42.538122  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:32:42.607706  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:32:42.607743  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
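With no containers found, the fallback diagnostics above are gathered in a fixed order: the kubelet journal, dmesg, `kubectl describe nodes` (which fails here because nothing is listening on localhost:8443), the CRI-O journal, and a container listing. A sketch of that collection step, reusing the exact commands from the log and simply echoing their output:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		sources := []struct{ label, cmd string }{
			{"kubelet", "sudo journalctl -u kubelet -n 400"},
			{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
			{"describe nodes", "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
			{"CRI-O", "sudo journalctl -u crio -n 400"},
			{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
		}
		for _, s := range sources {
			out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
			fmt.Printf("==> %s <== (err: %v)\n%s\n", s.label, err, out)
		}
	}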
	I0120 12:32:45.149684  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:45.161947  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:32:45.162031  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:32:45.204014  993585 cri.go:89] found id: ""
	I0120 12:32:45.204049  993585 logs.go:282] 0 containers: []
	W0120 12:32:45.204060  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:32:45.204068  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:32:45.204129  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:32:45.245164  993585 cri.go:89] found id: ""
	I0120 12:32:45.245196  993585 logs.go:282] 0 containers: []
	W0120 12:32:45.245206  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:32:45.245214  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:32:45.245278  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:32:45.285368  993585 cri.go:89] found id: ""
	I0120 12:32:45.285401  993585 logs.go:282] 0 containers: []
	W0120 12:32:45.285412  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:32:45.285420  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:32:45.285482  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:32:45.322496  993585 cri.go:89] found id: ""
	I0120 12:32:45.322551  993585 logs.go:282] 0 containers: []
	W0120 12:32:45.322564  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:32:45.322573  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:32:45.322632  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:32:45.353693  993585 cri.go:89] found id: ""
	I0120 12:32:45.353723  993585 logs.go:282] 0 containers: []
	W0120 12:32:45.353731  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:32:45.353737  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:32:45.353786  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:32:45.385705  993585 cri.go:89] found id: ""
	I0120 12:32:45.385735  993585 logs.go:282] 0 containers: []
	W0120 12:32:45.385744  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:32:45.385750  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:32:45.385800  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:32:45.419199  993585 cri.go:89] found id: ""
	I0120 12:32:45.419233  993585 logs.go:282] 0 containers: []
	W0120 12:32:45.419243  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:32:45.419251  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:32:45.419317  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:32:45.453757  993585 cri.go:89] found id: ""
	I0120 12:32:45.453789  993585 logs.go:282] 0 containers: []
	W0120 12:32:45.453800  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:32:45.453813  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:32:45.453828  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:32:45.502873  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:32:45.502902  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:32:45.515215  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:32:45.515240  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:32:45.581415  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:32:45.581443  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:32:45.581458  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:32:45.665418  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:32:45.665450  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:32:44.641564  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:46.642075  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:45.685384  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:48.184725  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:46.967382  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:48.971509  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:48.203193  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:48.215966  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:32:48.216028  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:32:48.247173  993585 cri.go:89] found id: ""
	I0120 12:32:48.247201  993585 logs.go:282] 0 containers: []
	W0120 12:32:48.247212  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:32:48.247219  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:32:48.247280  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:32:48.279393  993585 cri.go:89] found id: ""
	I0120 12:32:48.279421  993585 logs.go:282] 0 containers: []
	W0120 12:32:48.279428  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:32:48.279434  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:32:48.279488  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:32:48.310392  993585 cri.go:89] found id: ""
	I0120 12:32:48.310416  993585 logs.go:282] 0 containers: []
	W0120 12:32:48.310423  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:32:48.310429  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:32:48.310473  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:32:48.342762  993585 cri.go:89] found id: ""
	I0120 12:32:48.342794  993585 logs.go:282] 0 containers: []
	W0120 12:32:48.342803  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:32:48.342811  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:32:48.342869  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:32:48.373905  993585 cri.go:89] found id: ""
	I0120 12:32:48.373931  993585 logs.go:282] 0 containers: []
	W0120 12:32:48.373942  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:32:48.373952  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:32:48.374016  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:32:48.406406  993585 cri.go:89] found id: ""
	I0120 12:32:48.406435  993585 logs.go:282] 0 containers: []
	W0120 12:32:48.406443  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:32:48.406449  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:32:48.406494  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:32:48.442695  993585 cri.go:89] found id: ""
	I0120 12:32:48.442728  993585 logs.go:282] 0 containers: []
	W0120 12:32:48.442738  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:32:48.442746  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:32:48.442813  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:32:48.474459  993585 cri.go:89] found id: ""
	I0120 12:32:48.474485  993585 logs.go:282] 0 containers: []
	W0120 12:32:48.474494  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:32:48.474506  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:32:48.474535  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:32:48.522305  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:32:48.522337  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:32:48.535295  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:32:48.535322  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:32:48.605460  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:32:48.605493  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:32:48.605510  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:32:48.689980  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:32:48.690012  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:32:51.228008  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:51.240647  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:32:51.240708  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:32:51.274219  993585 cri.go:89] found id: ""
	I0120 12:32:51.274255  993585 logs.go:282] 0 containers: []
	W0120 12:32:51.274267  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:32:51.274275  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:32:51.274347  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:32:51.307904  993585 cri.go:89] found id: ""
	I0120 12:32:51.307930  993585 logs.go:282] 0 containers: []
	W0120 12:32:51.307939  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:32:51.307948  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:32:51.308000  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:32:51.342253  993585 cri.go:89] found id: ""
	I0120 12:32:51.342280  993585 logs.go:282] 0 containers: []
	W0120 12:32:51.342288  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:32:51.342294  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:32:51.342340  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:32:51.372185  993585 cri.go:89] found id: ""
	I0120 12:32:51.372211  993585 logs.go:282] 0 containers: []
	W0120 12:32:51.372218  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:32:51.372224  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:32:51.372268  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:32:51.402807  993585 cri.go:89] found id: ""
	I0120 12:32:51.402840  993585 logs.go:282] 0 containers: []
	W0120 12:32:51.402851  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:32:51.402858  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:32:51.402932  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:32:51.434101  993585 cri.go:89] found id: ""
	I0120 12:32:51.434129  993585 logs.go:282] 0 containers: []
	W0120 12:32:51.434139  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:32:51.434147  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:32:51.434217  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:32:51.467394  993585 cri.go:89] found id: ""
	I0120 12:32:51.467422  993585 logs.go:282] 0 containers: []
	W0120 12:32:51.467431  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:32:51.467438  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:32:51.467505  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:32:51.498551  993585 cri.go:89] found id: ""
	I0120 12:32:51.498582  993585 logs.go:282] 0 containers: []
	W0120 12:32:51.498592  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:32:51.498604  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:32:51.498619  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:32:51.577501  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:32:51.577533  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:32:51.618784  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:32:51.618825  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:32:51.671630  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:32:51.671667  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:32:51.685726  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:32:51.685750  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:32:51.751392  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:32:48.642162  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:51.142915  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:50.685157  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:53.185189  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:51.468237  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:53.967177  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:54.251524  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:54.265218  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:32:54.265281  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:32:54.299773  993585 cri.go:89] found id: ""
	I0120 12:32:54.299804  993585 logs.go:282] 0 containers: []
	W0120 12:32:54.299813  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:32:54.299820  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:32:54.299867  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:32:54.330432  993585 cri.go:89] found id: ""
	I0120 12:32:54.330461  993585 logs.go:282] 0 containers: []
	W0120 12:32:54.330471  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:32:54.330479  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:32:54.330565  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:32:54.366364  993585 cri.go:89] found id: ""
	I0120 12:32:54.366400  993585 logs.go:282] 0 containers: []
	W0120 12:32:54.366412  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:32:54.366420  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:32:54.366480  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:32:54.398373  993585 cri.go:89] found id: ""
	I0120 12:32:54.398407  993585 logs.go:282] 0 containers: []
	W0120 12:32:54.398417  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:32:54.398425  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:32:54.398486  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:32:54.437033  993585 cri.go:89] found id: ""
	I0120 12:32:54.437064  993585 logs.go:282] 0 containers: []
	W0120 12:32:54.437074  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:32:54.437081  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:32:54.437141  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:32:54.475179  993585 cri.go:89] found id: ""
	I0120 12:32:54.475203  993585 logs.go:282] 0 containers: []
	W0120 12:32:54.475211  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:32:54.475218  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:32:54.475276  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:32:54.507372  993585 cri.go:89] found id: ""
	I0120 12:32:54.507410  993585 logs.go:282] 0 containers: []
	W0120 12:32:54.507420  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:32:54.507428  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:32:54.507484  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:32:54.538317  993585 cri.go:89] found id: ""
	I0120 12:32:54.538351  993585 logs.go:282] 0 containers: []
	W0120 12:32:54.538362  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:32:54.538379  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:32:54.538400  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:32:54.620638  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:32:54.620683  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:32:54.657830  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:32:54.657859  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:32:54.707420  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:32:54.707448  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:32:54.719611  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:32:54.719640  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:32:54.784727  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:32:53.643750  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:56.141402  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:55.684905  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:57.686081  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:56.467036  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:58.468431  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:00.469379  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:57.285771  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:57.298606  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:32:57.298677  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:32:57.330216  993585 cri.go:89] found id: ""
	I0120 12:32:57.330245  993585 logs.go:282] 0 containers: []
	W0120 12:32:57.330254  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:32:57.330260  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:32:57.330317  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:32:57.362111  993585 cri.go:89] found id: ""
	I0120 12:32:57.362152  993585 logs.go:282] 0 containers: []
	W0120 12:32:57.362162  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:32:57.362169  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:32:57.362220  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:32:57.395597  993585 cri.go:89] found id: ""
	I0120 12:32:57.395624  993585 logs.go:282] 0 containers: []
	W0120 12:32:57.395634  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:32:57.395640  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:32:57.395700  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:32:57.425897  993585 cri.go:89] found id: ""
	I0120 12:32:57.425925  993585 logs.go:282] 0 containers: []
	W0120 12:32:57.425933  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:32:57.425939  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:32:57.425986  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:32:57.458500  993585 cri.go:89] found id: ""
	I0120 12:32:57.458544  993585 logs.go:282] 0 containers: []
	W0120 12:32:57.458554  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:32:57.458563  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:32:57.458625  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:32:57.489583  993585 cri.go:89] found id: ""
	I0120 12:32:57.489616  993585 logs.go:282] 0 containers: []
	W0120 12:32:57.489626  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:32:57.489634  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:32:57.489685  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:32:57.520588  993585 cri.go:89] found id: ""
	I0120 12:32:57.520617  993585 logs.go:282] 0 containers: []
	W0120 12:32:57.520624  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:32:57.520630  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:32:57.520676  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:32:57.555799  993585 cri.go:89] found id: ""
	I0120 12:32:57.555824  993585 logs.go:282] 0 containers: []
	W0120 12:32:57.555833  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:32:57.555843  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:32:57.555855  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:32:57.605038  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:32:57.605071  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:32:57.619575  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:32:57.619603  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:32:57.686685  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:32:57.686703  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:32:57.686731  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:32:57.762968  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:32:57.763003  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:00.306647  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:00.321029  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:00.321083  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:00.355924  993585 cri.go:89] found id: ""
	I0120 12:33:00.355954  993585 logs.go:282] 0 containers: []
	W0120 12:33:00.355963  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:00.355969  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:00.356021  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:00.390766  993585 cri.go:89] found id: ""
	I0120 12:33:00.390793  993585 logs.go:282] 0 containers: []
	W0120 12:33:00.390801  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:00.390807  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:00.390855  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:00.424790  993585 cri.go:89] found id: ""
	I0120 12:33:00.424820  993585 logs.go:282] 0 containers: []
	W0120 12:33:00.424828  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:00.424833  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:00.424880  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:00.454941  993585 cri.go:89] found id: ""
	I0120 12:33:00.454975  993585 logs.go:282] 0 containers: []
	W0120 12:33:00.454987  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:00.454995  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:00.455056  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:00.488642  993585 cri.go:89] found id: ""
	I0120 12:33:00.488670  993585 logs.go:282] 0 containers: []
	W0120 12:33:00.488679  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:00.488684  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:00.488731  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:00.518470  993585 cri.go:89] found id: ""
	I0120 12:33:00.518501  993585 logs.go:282] 0 containers: []
	W0120 12:33:00.518511  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:00.518535  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:00.518595  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:00.554139  993585 cri.go:89] found id: ""
	I0120 12:33:00.554167  993585 logs.go:282] 0 containers: []
	W0120 12:33:00.554174  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:00.554180  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:00.554236  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:00.587766  993585 cri.go:89] found id: ""
	I0120 12:33:00.587792  993585 logs.go:282] 0 containers: []
	W0120 12:33:00.587799  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:00.587809  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:00.587821  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:00.639504  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:00.639541  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:00.651660  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:00.651687  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:00.725669  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:00.725697  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:00.725716  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:00.806460  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:00.806496  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:32:58.642200  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:01.142620  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:00.184931  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:02.684980  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:02.967537  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:05.467661  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:03.341420  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:03.354948  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:03.355022  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:03.389867  993585 cri.go:89] found id: ""
	I0120 12:33:03.389965  993585 logs.go:282] 0 containers: []
	W0120 12:33:03.389977  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:03.389986  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:03.390042  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:03.421478  993585 cri.go:89] found id: ""
	I0120 12:33:03.421505  993585 logs.go:282] 0 containers: []
	W0120 12:33:03.421517  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:03.421525  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:03.421593  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:03.453805  993585 cri.go:89] found id: ""
	I0120 12:33:03.453838  993585 logs.go:282] 0 containers: []
	W0120 12:33:03.453850  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:03.453858  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:03.453917  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:03.487503  993585 cri.go:89] found id: ""
	I0120 12:33:03.487536  993585 logs.go:282] 0 containers: []
	W0120 12:33:03.487547  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:03.487555  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:03.487621  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:03.517560  993585 cri.go:89] found id: ""
	I0120 12:33:03.517585  993585 logs.go:282] 0 containers: []
	W0120 12:33:03.517594  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:03.517602  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:03.517661  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:03.547328  993585 cri.go:89] found id: ""
	I0120 12:33:03.547368  993585 logs.go:282] 0 containers: []
	W0120 12:33:03.547380  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:03.547389  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:03.547447  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:03.580215  993585 cri.go:89] found id: ""
	I0120 12:33:03.580242  993585 logs.go:282] 0 containers: []
	W0120 12:33:03.580251  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:03.580256  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:03.580319  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:03.613176  993585 cri.go:89] found id: ""
	I0120 12:33:03.613208  993585 logs.go:282] 0 containers: []
	W0120 12:33:03.613220  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:03.613233  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:03.613247  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:03.667093  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:03.667129  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:03.680234  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:03.680260  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:03.744763  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:03.744788  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:03.744805  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:03.824813  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:03.824856  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:06.364296  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:06.377247  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:06.377314  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:06.408701  993585 cri.go:89] found id: ""
	I0120 12:33:06.408725  993585 logs.go:282] 0 containers: []
	W0120 12:33:06.408733  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:06.408738  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:06.408800  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:06.440716  993585 cri.go:89] found id: ""
	I0120 12:33:06.440744  993585 logs.go:282] 0 containers: []
	W0120 12:33:06.440752  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:06.440758  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:06.440811  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:06.471832  993585 cri.go:89] found id: ""
	I0120 12:33:06.471866  993585 logs.go:282] 0 containers: []
	W0120 12:33:06.471877  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:06.471884  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:06.471947  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:06.504122  993585 cri.go:89] found id: ""
	I0120 12:33:06.504149  993585 logs.go:282] 0 containers: []
	W0120 12:33:06.504157  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:06.504163  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:06.504214  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:06.535353  993585 cri.go:89] found id: ""
	I0120 12:33:06.535386  993585 logs.go:282] 0 containers: []
	W0120 12:33:06.535397  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:06.535405  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:06.535460  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:06.571284  993585 cri.go:89] found id: ""
	I0120 12:33:06.571309  993585 logs.go:282] 0 containers: []
	W0120 12:33:06.571316  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:06.571322  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:06.571379  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:06.604008  993585 cri.go:89] found id: ""
	I0120 12:33:06.604042  993585 logs.go:282] 0 containers: []
	W0120 12:33:06.604055  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:06.604062  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:06.604142  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:06.636221  993585 cri.go:89] found id: ""
	I0120 12:33:06.636258  993585 logs.go:282] 0 containers: []
	W0120 12:33:06.636270  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:06.636284  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:06.636299  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:06.671820  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:06.671845  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:06.723338  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:06.723369  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:06.736258  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:06.736285  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:06.807310  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:06.807336  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:06.807352  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:03.642811  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:06.141374  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:04.685422  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:07.184287  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:09.185215  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:07.469260  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:09.967169  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:09.386909  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:09.399300  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:09.399363  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:09.431976  993585 cri.go:89] found id: ""
	I0120 12:33:09.432013  993585 logs.go:282] 0 containers: []
	W0120 12:33:09.432025  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:09.432032  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:09.432085  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:09.468016  993585 cri.go:89] found id: ""
	I0120 12:33:09.468042  993585 logs.go:282] 0 containers: []
	W0120 12:33:09.468053  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:09.468061  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:09.468124  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:09.501613  993585 cri.go:89] found id: ""
	I0120 12:33:09.501648  993585 logs.go:282] 0 containers: []
	W0120 12:33:09.501657  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:09.501667  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:09.501734  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:09.535261  993585 cri.go:89] found id: ""
	I0120 12:33:09.535296  993585 logs.go:282] 0 containers: []
	W0120 12:33:09.535308  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:09.535315  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:09.535382  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:09.569838  993585 cri.go:89] found id: ""
	I0120 12:33:09.569873  993585 logs.go:282] 0 containers: []
	W0120 12:33:09.569885  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:09.569893  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:09.569961  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:09.601673  993585 cri.go:89] found id: ""
	I0120 12:33:09.601701  993585 logs.go:282] 0 containers: []
	W0120 12:33:09.601709  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:09.601714  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:09.601773  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:09.638035  993585 cri.go:89] found id: ""
	I0120 12:33:09.638068  993585 logs.go:282] 0 containers: []
	W0120 12:33:09.638080  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:09.638089  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:09.638155  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:09.671128  993585 cri.go:89] found id: ""
	I0120 12:33:09.671149  993585 logs.go:282] 0 containers: []
	W0120 12:33:09.671156  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:09.671165  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:09.671178  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:09.723616  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:09.723648  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:09.737987  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:09.738020  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:09.810583  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:09.810613  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:09.810627  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:09.887641  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:09.887676  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:08.141896  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:10.642250  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:11.685128  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:13.686705  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:11.968039  992109 pod_ready.go:103] pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:13.962039  992109 pod_ready.go:82] duration metric: took 4m0.001004044s for pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace to be "Ready" ...
	E0120 12:33:13.962067  992109 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-4zkcz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0120 12:33:13.962099  992109 pod_ready.go:39] duration metric: took 4m14.545589853s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:33:13.962140  992109 kubeadm.go:597] duration metric: took 4m21.118193658s to restartPrimaryControlPlane
	W0120 12:33:13.962239  992109 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0120 12:33:13.962281  992109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 12:33:12.423728  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:12.437277  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:12.437368  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:12.470427  993585 cri.go:89] found id: ""
	I0120 12:33:12.470455  993585 logs.go:282] 0 containers: []
	W0120 12:33:12.470463  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:12.470468  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:12.470546  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:12.501063  993585 cri.go:89] found id: ""
	I0120 12:33:12.501103  993585 logs.go:282] 0 containers: []
	W0120 12:33:12.501130  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:12.501138  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:12.501287  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:12.535254  993585 cri.go:89] found id: ""
	I0120 12:33:12.535284  993585 logs.go:282] 0 containers: []
	W0120 12:33:12.535295  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:12.535303  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:12.535354  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:12.568250  993585 cri.go:89] found id: ""
	I0120 12:33:12.568289  993585 logs.go:282] 0 containers: []
	W0120 12:33:12.568301  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:12.568307  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:12.568372  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:12.599927  993585 cri.go:89] found id: ""
	I0120 12:33:12.599961  993585 logs.go:282] 0 containers: []
	W0120 12:33:12.599970  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:12.599976  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:12.600031  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:12.632502  993585 cri.go:89] found id: ""
	I0120 12:33:12.632537  993585 logs.go:282] 0 containers: []
	W0120 12:33:12.632549  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:12.632559  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:12.632620  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:12.664166  993585 cri.go:89] found id: ""
	I0120 12:33:12.664200  993585 logs.go:282] 0 containers: []
	W0120 12:33:12.664208  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:12.664216  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:12.664270  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:12.697996  993585 cri.go:89] found id: ""
	I0120 12:33:12.698028  993585 logs.go:282] 0 containers: []
	W0120 12:33:12.698039  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:12.698054  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:12.698070  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:12.751712  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:12.751745  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:12.765184  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:12.765213  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:12.830999  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:12.831027  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:12.831046  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:12.911211  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:12.911244  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:15.449634  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:15.464863  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:15.464931  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:15.495576  993585 cri.go:89] found id: ""
	I0120 12:33:15.495609  993585 logs.go:282] 0 containers: []
	W0120 12:33:15.495620  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:15.495629  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:15.495689  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:15.525730  993585 cri.go:89] found id: ""
	I0120 12:33:15.525757  993585 logs.go:282] 0 containers: []
	W0120 12:33:15.525767  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:15.525775  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:15.525832  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:15.556077  993585 cri.go:89] found id: ""
	I0120 12:33:15.556117  993585 logs.go:282] 0 containers: []
	W0120 12:33:15.556127  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:15.556135  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:15.556195  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:15.585820  993585 cri.go:89] found id: ""
	I0120 12:33:15.585852  993585 logs.go:282] 0 containers: []
	W0120 12:33:15.585860  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:15.585867  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:15.585924  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:15.615985  993585 cri.go:89] found id: ""
	I0120 12:33:15.616027  993585 logs.go:282] 0 containers: []
	W0120 12:33:15.616035  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:15.616041  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:15.616093  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:15.648570  993585 cri.go:89] found id: ""
	I0120 12:33:15.648604  993585 logs.go:282] 0 containers: []
	W0120 12:33:15.648611  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:15.648617  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:15.648664  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:15.678674  993585 cri.go:89] found id: ""
	I0120 12:33:15.678704  993585 logs.go:282] 0 containers: []
	W0120 12:33:15.678714  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:15.678721  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:15.678786  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:15.708444  993585 cri.go:89] found id: ""
	I0120 12:33:15.708468  993585 logs.go:282] 0 containers: []
	W0120 12:33:15.708476  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:15.708485  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:15.708500  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:15.758053  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:15.758083  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:15.770661  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:15.770688  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:15.833234  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:15.833257  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:15.833271  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:15.906939  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:15.906969  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:13.142031  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:15.642742  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:16.184659  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:18.185053  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:18.442922  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:18.455489  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:18.455557  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:18.495102  993585 cri.go:89] found id: ""
	I0120 12:33:18.495135  993585 logs.go:282] 0 containers: []
	W0120 12:33:18.495145  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:18.495154  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:18.495225  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:18.530047  993585 cri.go:89] found id: ""
	I0120 12:33:18.530078  993585 logs.go:282] 0 containers: []
	W0120 12:33:18.530094  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:18.530102  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:18.530165  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:18.566556  993585 cri.go:89] found id: ""
	I0120 12:33:18.566585  993585 logs.go:282] 0 containers: []
	W0120 12:33:18.566595  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:18.566602  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:18.566661  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:18.604783  993585 cri.go:89] found id: ""
	I0120 12:33:18.604819  993585 logs.go:282] 0 containers: []
	W0120 12:33:18.604834  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:18.604842  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:18.604913  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:18.638998  993585 cri.go:89] found id: ""
	I0120 12:33:18.639025  993585 logs.go:282] 0 containers: []
	W0120 12:33:18.639036  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:18.639043  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:18.639107  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:18.669083  993585 cri.go:89] found id: ""
	I0120 12:33:18.669121  993585 logs.go:282] 0 containers: []
	W0120 12:33:18.669130  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:18.669136  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:18.669192  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:18.701062  993585 cri.go:89] found id: ""
	I0120 12:33:18.701089  993585 logs.go:282] 0 containers: []
	W0120 12:33:18.701097  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:18.701115  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:18.701180  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:18.732086  993585 cri.go:89] found id: ""
	I0120 12:33:18.732131  993585 logs.go:282] 0 containers: []
	W0120 12:33:18.732142  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:18.732157  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:18.732174  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:18.779325  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:18.779357  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:18.792530  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:18.792565  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:18.863429  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:18.863452  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:18.863464  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:18.941343  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:18.941375  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:21.481380  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:21.493618  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:21.493699  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:21.524040  993585 cri.go:89] found id: ""
	I0120 12:33:21.524067  993585 logs.go:282] 0 containers: []
	W0120 12:33:21.524075  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:21.524081  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:21.524149  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:21.554666  993585 cri.go:89] found id: ""
	I0120 12:33:21.554698  993585 logs.go:282] 0 containers: []
	W0120 12:33:21.554708  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:21.554715  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:21.554762  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:21.585584  993585 cri.go:89] found id: ""
	I0120 12:33:21.585610  993585 logs.go:282] 0 containers: []
	W0120 12:33:21.585617  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:21.585623  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:21.585670  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:21.615611  993585 cri.go:89] found id: ""
	I0120 12:33:21.615646  993585 logs.go:282] 0 containers: []
	W0120 12:33:21.615657  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:21.615666  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:21.615715  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:21.646761  993585 cri.go:89] found id: ""
	I0120 12:33:21.646788  993585 logs.go:282] 0 containers: []
	W0120 12:33:21.646796  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:21.646801  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:21.646853  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:21.681380  993585 cri.go:89] found id: ""
	I0120 12:33:21.681410  993585 logs.go:282] 0 containers: []
	W0120 12:33:21.681420  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:21.681428  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:21.681488  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:21.712708  993585 cri.go:89] found id: ""
	I0120 12:33:21.712743  993585 logs.go:282] 0 containers: []
	W0120 12:33:21.712759  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:21.712766  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:21.712828  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:21.746105  993585 cri.go:89] found id: ""
	I0120 12:33:21.746132  993585 logs.go:282] 0 containers: []
	W0120 12:33:21.746140  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:21.746150  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:21.746162  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:21.795702  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:21.795744  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:21.807548  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:21.807570  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:21.869605  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
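	Every "describe nodes" attempt in this stretch fails the same way: kubectl cannot reach the API server on localhost:8443, which is consistent with the empty crictl listings above (no kube-apiserver container is running yet). A quick way to confirm from inside the node that nothing is listening on the apiserver port — an illustrative diagnostic, not part of minikube's own log gathering:
	
	    # if ss prints nothing for 8443, no process is bound to the apiserver port yet
	    sudo ss -ltnp | grep 8443 || echo "no listener on 8443"
	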
	I0120 12:33:21.869627  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:21.869646  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:21.941092  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:21.941120  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
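	The "container status" probe minikube keeps re-running above is a single shell fallback chain: resolve crictl if it is on PATH, list all containers with it, and only fall back to the Docker CLI if that fails. The same commands split out purely for readability (illustrative layout, not minikube's source):
	
	    # prefer crictl when it is installed; otherwise try the bare name,
	    # and fall back to docker if the crictl invocation fails entirely
	    CRICTL="$(which crictl || echo crictl)"
	    sudo "$CRICTL" ps -a || sudo docker ps -a
	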
	I0120 12:33:18.142112  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:20.642242  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:20.185265  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:22.684404  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:24.487520  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:24.501031  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:24.501119  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:24.533191  993585 cri.go:89] found id: ""
	I0120 12:33:24.533220  993585 logs.go:282] 0 containers: []
	W0120 12:33:24.533230  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:24.533237  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:24.533300  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:24.565809  993585 cri.go:89] found id: ""
	I0120 12:33:24.565837  993585 logs.go:282] 0 containers: []
	W0120 12:33:24.565845  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:24.565850  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:24.565902  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:24.600607  993585 cri.go:89] found id: ""
	I0120 12:33:24.600643  993585 logs.go:282] 0 containers: []
	W0120 12:33:24.600655  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:24.600663  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:24.600742  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:24.637320  993585 cri.go:89] found id: ""
	I0120 12:33:24.637354  993585 logs.go:282] 0 containers: []
	W0120 12:33:24.637365  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:24.637373  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:24.637433  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:24.674906  993585 cri.go:89] found id: ""
	I0120 12:33:24.674940  993585 logs.go:282] 0 containers: []
	W0120 12:33:24.674952  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:24.674960  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:24.675024  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:24.707058  993585 cri.go:89] found id: ""
	I0120 12:33:24.707084  993585 logs.go:282] 0 containers: []
	W0120 12:33:24.707091  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:24.707097  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:24.707159  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:24.740554  993585 cri.go:89] found id: ""
	I0120 12:33:24.740590  993585 logs.go:282] 0 containers: []
	W0120 12:33:24.740603  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:24.740614  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:24.740680  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:24.773021  993585 cri.go:89] found id: ""
	I0120 12:33:24.773052  993585 logs.go:282] 0 containers: []
	W0120 12:33:24.773064  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:24.773077  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:24.773094  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:24.863129  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:24.863156  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:24.863169  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:24.939479  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:24.939516  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:24.975325  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:24.975358  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:25.026952  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:25.026993  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:23.141922  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:25.142300  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:24.685216  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:26.687261  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:29.183496  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:27.539957  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:27.553387  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:27.553449  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:27.587773  993585 cri.go:89] found id: ""
	I0120 12:33:27.587804  993585 logs.go:282] 0 containers: []
	W0120 12:33:27.587812  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:27.587818  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:27.587868  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:27.617735  993585 cri.go:89] found id: ""
	I0120 12:33:27.617767  993585 logs.go:282] 0 containers: []
	W0120 12:33:27.617777  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:27.617785  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:27.617865  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:27.652958  993585 cri.go:89] found id: ""
	I0120 12:33:27.652978  993585 logs.go:282] 0 containers: []
	W0120 12:33:27.652985  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:27.652990  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:27.653047  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:27.686924  993585 cri.go:89] found id: ""
	I0120 12:33:27.686947  993585 logs.go:282] 0 containers: []
	W0120 12:33:27.686954  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:27.686960  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:27.687012  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:27.720217  993585 cri.go:89] found id: ""
	I0120 12:33:27.720246  993585 logs.go:282] 0 containers: []
	W0120 12:33:27.720258  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:27.720265  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:27.720334  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:27.757382  993585 cri.go:89] found id: ""
	I0120 12:33:27.757418  993585 logs.go:282] 0 containers: []
	W0120 12:33:27.757430  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:27.757438  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:27.757504  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:27.788498  993585 cri.go:89] found id: ""
	I0120 12:33:27.788528  993585 logs.go:282] 0 containers: []
	W0120 12:33:27.788538  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:27.788546  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:27.788616  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:27.820146  993585 cri.go:89] found id: ""
	I0120 12:33:27.820178  993585 logs.go:282] 0 containers: []
	W0120 12:33:27.820186  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:27.820196  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:27.820207  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:27.832201  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:27.832225  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:27.905179  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:27.905202  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:27.905227  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:27.984792  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:27.984829  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:28.027290  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:28.027397  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:30.578691  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:30.591302  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:30.591365  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:30.627747  993585 cri.go:89] found id: ""
	I0120 12:33:30.627775  993585 logs.go:282] 0 containers: []
	W0120 12:33:30.627802  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:30.627810  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:30.627881  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:30.674653  993585 cri.go:89] found id: ""
	I0120 12:33:30.674684  993585 logs.go:282] 0 containers: []
	W0120 12:33:30.674694  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:30.674702  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:30.674766  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:30.716811  993585 cri.go:89] found id: ""
	I0120 12:33:30.716839  993585 logs.go:282] 0 containers: []
	W0120 12:33:30.716850  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:30.716857  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:30.716922  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:30.749623  993585 cri.go:89] found id: ""
	I0120 12:33:30.749655  993585 logs.go:282] 0 containers: []
	W0120 12:33:30.749666  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:30.749674  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:30.749742  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:30.780140  993585 cri.go:89] found id: ""
	I0120 12:33:30.780172  993585 logs.go:282] 0 containers: []
	W0120 12:33:30.780180  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:30.780186  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:30.780241  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:30.808356  993585 cri.go:89] found id: ""
	I0120 12:33:30.808387  993585 logs.go:282] 0 containers: []
	W0120 12:33:30.808395  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:30.808407  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:30.808476  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:30.842019  993585 cri.go:89] found id: ""
	I0120 12:33:30.842047  993585 logs.go:282] 0 containers: []
	W0120 12:33:30.842054  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:30.842060  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:30.842109  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:30.871526  993585 cri.go:89] found id: ""
	I0120 12:33:30.871551  993585 logs.go:282] 0 containers: []
	W0120 12:33:30.871559  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:30.871568  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:30.871581  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:30.919022  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:30.919051  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:30.931897  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:30.931933  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:30.993261  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:30.993282  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:30.993296  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:31.069346  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:31.069384  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:27.642074  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:30.142170  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:31.184534  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:33.184696  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:33.606755  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:33.619163  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:33.619232  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:33.654390  993585 cri.go:89] found id: ""
	I0120 12:33:33.654423  993585 logs.go:282] 0 containers: []
	W0120 12:33:33.654432  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:33.654438  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:33.654487  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:33.689183  993585 cri.go:89] found id: ""
	I0120 12:33:33.689218  993585 logs.go:282] 0 containers: []
	W0120 12:33:33.689230  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:33.689239  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:33.689302  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:33.720803  993585 cri.go:89] found id: ""
	I0120 12:33:33.720832  993585 logs.go:282] 0 containers: []
	W0120 12:33:33.720839  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:33.720845  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:33.720893  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:33.755948  993585 cri.go:89] found id: ""
	I0120 12:33:33.755985  993585 logs.go:282] 0 containers: []
	W0120 12:33:33.755995  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:33.756003  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:33.756071  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:33.788407  993585 cri.go:89] found id: ""
	I0120 12:33:33.788444  993585 logs.go:282] 0 containers: []
	W0120 12:33:33.788457  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:33.788466  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:33.788524  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:33.819077  993585 cri.go:89] found id: ""
	I0120 12:33:33.819102  993585 logs.go:282] 0 containers: []
	W0120 12:33:33.819109  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:33.819115  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:33.819164  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:33.848263  993585 cri.go:89] found id: ""
	I0120 12:33:33.848288  993585 logs.go:282] 0 containers: []
	W0120 12:33:33.848296  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:33.848301  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:33.848347  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:33.877393  993585 cri.go:89] found id: ""
	I0120 12:33:33.877428  993585 logs.go:282] 0 containers: []
	W0120 12:33:33.877439  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:33.877451  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:33.877462  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:33.928766  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:33.928796  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:33.941450  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:33.941474  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:34.004416  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:34.004446  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:34.004461  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:34.079056  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:34.079088  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:36.622644  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:36.634862  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:36.634939  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:36.670074  993585 cri.go:89] found id: ""
	I0120 12:33:36.670113  993585 logs.go:282] 0 containers: []
	W0120 12:33:36.670124  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:36.670132  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:36.670189  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:36.706117  993585 cri.go:89] found id: ""
	I0120 12:33:36.706152  993585 logs.go:282] 0 containers: []
	W0120 12:33:36.706159  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:36.706164  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:36.706219  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:36.741133  993585 cri.go:89] found id: ""
	I0120 12:33:36.741167  993585 logs.go:282] 0 containers: []
	W0120 12:33:36.741177  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:36.741185  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:36.741242  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:36.773791  993585 cri.go:89] found id: ""
	I0120 12:33:36.773819  993585 logs.go:282] 0 containers: []
	W0120 12:33:36.773830  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:36.773837  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:36.773901  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:36.807401  993585 cri.go:89] found id: ""
	I0120 12:33:36.807432  993585 logs.go:282] 0 containers: []
	W0120 12:33:36.807440  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:36.807447  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:36.807500  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:36.839815  993585 cri.go:89] found id: ""
	I0120 12:33:36.839850  993585 logs.go:282] 0 containers: []
	W0120 12:33:36.839861  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:36.839870  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:36.839934  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:36.868579  993585 cri.go:89] found id: ""
	I0120 12:33:36.868610  993585 logs.go:282] 0 containers: []
	W0120 12:33:36.868620  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:36.868626  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:36.868685  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:36.898430  993585 cri.go:89] found id: ""
	I0120 12:33:36.898455  993585 logs.go:282] 0 containers: []
	W0120 12:33:36.898462  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:36.898475  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:36.898490  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:36.947718  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:36.947758  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:32.641645  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:35.141557  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:37.141719  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:35.684708  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:37.685419  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:36.962705  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:36.962740  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:37.053761  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:37.053792  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:37.053805  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:37.148364  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:37.148400  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:39.690060  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:39.702447  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:39.702516  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:39.733846  993585 cri.go:89] found id: ""
	I0120 12:33:39.733868  993585 logs.go:282] 0 containers: []
	W0120 12:33:39.733876  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:39.733883  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:39.733939  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:39.762657  993585 cri.go:89] found id: ""
	I0120 12:33:39.762682  993585 logs.go:282] 0 containers: []
	W0120 12:33:39.762690  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:39.762695  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:39.762743  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:39.794803  993585 cri.go:89] found id: ""
	I0120 12:33:39.794832  993585 logs.go:282] 0 containers: []
	W0120 12:33:39.794841  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:39.794847  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:39.794891  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:39.823584  993585 cri.go:89] found id: ""
	I0120 12:33:39.823614  993585 logs.go:282] 0 containers: []
	W0120 12:33:39.823625  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:39.823633  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:39.823689  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:39.851954  993585 cri.go:89] found id: ""
	I0120 12:33:39.851978  993585 logs.go:282] 0 containers: []
	W0120 12:33:39.851985  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:39.851991  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:39.852091  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:39.881315  993585 cri.go:89] found id: ""
	I0120 12:33:39.881347  993585 logs.go:282] 0 containers: []
	W0120 12:33:39.881358  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:39.881367  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:39.881428  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:39.911797  993585 cri.go:89] found id: ""
	I0120 12:33:39.911827  993585 logs.go:282] 0 containers: []
	W0120 12:33:39.911836  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:39.911841  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:39.911887  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:39.941625  993585 cri.go:89] found id: ""
	I0120 12:33:39.941653  993585 logs.go:282] 0 containers: []
	W0120 12:33:39.941661  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:39.941671  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:39.941683  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:39.991689  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:39.991718  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:40.004850  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:40.004871  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:40.069863  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:40.069883  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:40.069894  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:40.149093  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:40.149129  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:39.142612  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:41.145567  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:40.184106  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:42.184765  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:41.582218  992109 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.61991226s)
	I0120 12:33:41.582297  992109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 12:33:41.597367  992109 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 12:33:41.606890  992109 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:33:41.615799  992109 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:33:41.615823  992109 kubeadm.go:157] found existing configuration files:
	
	I0120 12:33:41.615890  992109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 12:33:41.624548  992109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:33:41.624613  992109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:33:41.634296  992109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 12:33:41.645019  992109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:33:41.645069  992109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:33:41.653988  992109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 12:33:41.662620  992109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:33:41.662661  992109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:33:41.671164  992109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 12:33:41.679068  992109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:33:41.679121  992109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
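	The block above is minikube's stale-config cleanup: for each kubeconfig under /etc/kubernetes it greps for the expected control-plane endpoint and removes the file when the check fails (here every grep exits with status 2 because the files no longer exist after the earlier kubeadm reset). An equivalent shell sketch of that per-file check; the loop form is illustrative, not minikube's actual code:
	
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done
	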
	I0120 12:33:41.687730  992109 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 12:33:41.842158  992109 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
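	The kubeadm init invocation started just above is logged as one long line; the fixed --ignore-preflight-errors list is what lets kubeadm proceed even though the minikube directories and static-pod manifest paths already exist on disk. The same call with each flag on its own line (reflowed from the log line only for readability):
	
	    sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml \
	      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem
	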
	I0120 12:33:42.692596  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:42.710550  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:42.710636  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:42.761626  993585 cri.go:89] found id: ""
	I0120 12:33:42.761665  993585 logs.go:282] 0 containers: []
	W0120 12:33:42.761677  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:42.761685  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:42.761753  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:42.825148  993585 cri.go:89] found id: ""
	I0120 12:33:42.825181  993585 logs.go:282] 0 containers: []
	W0120 12:33:42.825191  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:42.825196  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:42.825258  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:42.859035  993585 cri.go:89] found id: ""
	I0120 12:33:42.859066  993585 logs.go:282] 0 containers: []
	W0120 12:33:42.859075  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:42.859081  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:42.859134  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:42.890335  993585 cri.go:89] found id: ""
	I0120 12:33:42.890364  993585 logs.go:282] 0 containers: []
	W0120 12:33:42.890372  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:42.890378  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:42.890442  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:42.929857  993585 cri.go:89] found id: ""
	I0120 12:33:42.929882  993585 logs.go:282] 0 containers: []
	W0120 12:33:42.929890  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:42.929896  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:42.929944  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:42.960830  993585 cri.go:89] found id: ""
	I0120 12:33:42.960864  993585 logs.go:282] 0 containers: []
	W0120 12:33:42.960874  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:42.960882  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:42.960948  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:42.995324  993585 cri.go:89] found id: ""
	I0120 12:33:42.995354  993585 logs.go:282] 0 containers: []
	W0120 12:33:42.995368  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:42.995374  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:42.995424  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:43.028259  993585 cri.go:89] found id: ""
	I0120 12:33:43.028286  993585 logs.go:282] 0 containers: []
	W0120 12:33:43.028294  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:43.028306  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:43.028316  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:43.079487  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:43.079517  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:43.091452  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:43.091475  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:43.153152  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:43.153178  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:43.153192  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:43.236284  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:43.236325  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:45.774706  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:45.791967  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:45.792052  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:45.824678  993585 cri.go:89] found id: ""
	I0120 12:33:45.824710  993585 logs.go:282] 0 containers: []
	W0120 12:33:45.824720  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:45.824729  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:45.824793  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:45.857843  993585 cri.go:89] found id: ""
	I0120 12:33:45.857876  993585 logs.go:282] 0 containers: []
	W0120 12:33:45.857885  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:45.857891  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:45.857944  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:45.898182  993585 cri.go:89] found id: ""
	I0120 12:33:45.898215  993585 logs.go:282] 0 containers: []
	W0120 12:33:45.898227  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:45.898235  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:45.898302  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:45.929223  993585 cri.go:89] found id: ""
	I0120 12:33:45.929259  993585 logs.go:282] 0 containers: []
	W0120 12:33:45.929272  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:45.929282  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:45.929380  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:45.960800  993585 cri.go:89] found id: ""
	I0120 12:33:45.960849  993585 logs.go:282] 0 containers: []
	W0120 12:33:45.960870  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:45.960879  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:45.960957  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:45.997846  993585 cri.go:89] found id: ""
	I0120 12:33:45.997878  993585 logs.go:282] 0 containers: []
	W0120 12:33:45.997889  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:45.997897  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:45.997969  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:46.033227  993585 cri.go:89] found id: ""
	I0120 12:33:46.033267  993585 logs.go:282] 0 containers: []
	W0120 12:33:46.033278  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:46.033286  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:46.033354  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:46.066691  993585 cri.go:89] found id: ""
	I0120 12:33:46.066723  993585 logs.go:282] 0 containers: []
	W0120 12:33:46.066733  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:46.066746  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:46.066763  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:46.133257  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:46.133280  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:46.133293  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:46.232667  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:46.232720  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:46.274332  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:46.274371  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:46.327098  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:46.327142  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:43.642109  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:45.643138  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:44.686233  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:47.185408  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:49.186465  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:49.627545  992109 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 12:33:49.627631  992109 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 12:33:49.627743  992109 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 12:33:49.627898  992109 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 12:33:49.628021  992109 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 12:33:49.628110  992109 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 12:33:49.629521  992109 out.go:235]   - Generating certificates and keys ...
	I0120 12:33:49.629586  992109 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 12:33:49.629652  992109 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 12:33:49.629732  992109 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 12:33:49.629811  992109 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 12:33:49.629945  992109 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 12:33:49.630101  992109 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 12:33:49.630179  992109 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 12:33:49.630255  992109 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 12:33:49.630331  992109 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 12:33:49.630426  992109 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 12:33:49.630491  992109 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 12:33:49.630586  992109 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 12:33:49.630669  992109 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 12:33:49.630752  992109 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 12:33:49.630819  992109 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 12:33:49.630898  992109 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 12:33:49.630946  992109 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 12:33:49.631065  992109 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 12:33:49.631148  992109 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 12:33:49.632352  992109 out.go:235]   - Booting up control plane ...
	I0120 12:33:49.632439  992109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 12:33:49.632500  992109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 12:33:49.632581  992109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 12:33:49.632734  992109 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 12:33:49.632818  992109 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 12:33:49.632854  992109 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 12:33:49.632972  992109 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 12:33:49.633093  992109 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 12:33:49.633183  992109 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.459324ms
	I0120 12:33:49.633288  992109 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 12:33:49.633376  992109 kubeadm.go:310] [api-check] The API server is healthy after 5.002077681s
	I0120 12:33:49.633495  992109 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 12:33:49.633603  992109 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 12:33:49.633652  992109 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 12:33:49.633813  992109 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-496524 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 12:33:49.633900  992109 kubeadm.go:310] [bootstrap-token] Using token: sww9nb.rwz5issf9tlw104y
	I0120 12:33:49.635315  992109 out.go:235]   - Configuring RBAC rules ...
	I0120 12:33:49.635441  992109 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 12:33:49.635546  992109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 12:33:49.635673  992109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 12:33:49.635790  992109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 12:33:49.635890  992109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 12:33:49.635965  992109 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 12:33:49.636063  992109 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 12:33:49.636105  992109 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 12:33:49.636151  992109 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 12:33:49.636157  992109 kubeadm.go:310] 
	I0120 12:33:49.636247  992109 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 12:33:49.636272  992109 kubeadm.go:310] 
	I0120 12:33:49.636388  992109 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 12:33:49.636400  992109 kubeadm.go:310] 
	I0120 12:33:49.636441  992109 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 12:33:49.636523  992109 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 12:33:49.636598  992109 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 12:33:49.636608  992109 kubeadm.go:310] 
	I0120 12:33:49.636714  992109 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 12:33:49.636738  992109 kubeadm.go:310] 
	I0120 12:33:49.636800  992109 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 12:33:49.636810  992109 kubeadm.go:310] 
	I0120 12:33:49.636874  992109 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 12:33:49.636984  992109 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 12:33:49.637071  992109 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 12:33:49.637082  992109 kubeadm.go:310] 
	I0120 12:33:49.637206  992109 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 12:33:49.637348  992109 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 12:33:49.637365  992109 kubeadm.go:310] 
	I0120 12:33:49.637484  992109 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token sww9nb.rwz5issf9tlw104y \
	I0120 12:33:49.637627  992109 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9daca4b474befed26d429fab1ed19f40e58ea9925316ea59a2e801171f5c9665 \
	I0120 12:33:49.637685  992109 kubeadm.go:310] 	--control-plane 
	I0120 12:33:49.637704  992109 kubeadm.go:310] 
	I0120 12:33:49.637810  992109 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 12:33:49.637826  992109 kubeadm.go:310] 
	I0120 12:33:49.637934  992109 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sww9nb.rwz5issf9tlw104y \
	I0120 12:33:49.638086  992109 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9daca4b474befed26d429fab1ed19f40e58ea9925316ea59a2e801171f5c9665 
	I0120 12:33:49.638103  992109 cni.go:84] Creating CNI manager for ""
	I0120 12:33:49.638112  992109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:33:49.639791  992109 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 12:33:49.641114  992109 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 12:33:49.651726  992109 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 12:33:49.670543  992109 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 12:33:49.670636  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:49.670688  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-496524 minikube.k8s.io/updated_at=2025_01_20T12_33_49_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=77d80cf1517f5f1439721b28711982314b21bec9 minikube.k8s.io/name=no-preload-496524 minikube.k8s.io/primary=true
	I0120 12:33:49.704840  992109 ops.go:34] apiserver oom_adj: -16
	I0120 12:33:49.859209  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:50.359791  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:50.859509  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:48.841385  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:48.854037  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:48.854105  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:48.889959  993585 cri.go:89] found id: ""
	I0120 12:33:48.889996  993585 logs.go:282] 0 containers: []
	W0120 12:33:48.890008  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:48.890017  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:48.890084  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:48.926271  993585 cri.go:89] found id: ""
	I0120 12:33:48.926313  993585 logs.go:282] 0 containers: []
	W0120 12:33:48.926326  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:48.926334  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:48.926409  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:48.962768  993585 cri.go:89] found id: ""
	I0120 12:33:48.962803  993585 logs.go:282] 0 containers: []
	W0120 12:33:48.962816  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:48.962825  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:48.962895  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:48.998039  993585 cri.go:89] found id: ""
	I0120 12:33:48.998075  993585 logs.go:282] 0 containers: []
	W0120 12:33:48.998086  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:48.998093  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:48.998161  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:49.038710  993585 cri.go:89] found id: ""
	I0120 12:33:49.038745  993585 logs.go:282] 0 containers: []
	W0120 12:33:49.038756  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:49.038765  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:49.038835  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:49.074829  993585 cri.go:89] found id: ""
	I0120 12:33:49.074863  993585 logs.go:282] 0 containers: []
	W0120 12:33:49.074874  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:49.074883  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:49.074950  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:49.115354  993585 cri.go:89] found id: ""
	I0120 12:33:49.115383  993585 logs.go:282] 0 containers: []
	W0120 12:33:49.115392  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:49.115397  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:49.115446  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:49.152837  993585 cri.go:89] found id: ""
	I0120 12:33:49.152870  993585 logs.go:282] 0 containers: []
	W0120 12:33:49.152880  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:49.152892  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:49.152906  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:49.194817  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:49.194842  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:49.247223  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:49.247255  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:49.259939  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:49.259965  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:49.326047  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:49.326081  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:49.326108  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:51.904391  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:51.916726  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:51.916806  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:51.950574  993585 cri.go:89] found id: ""
	I0120 12:33:51.950602  993585 logs.go:282] 0 containers: []
	W0120 12:33:51.950610  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:51.950619  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:51.950683  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:48.141455  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:50.142912  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:51.359718  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:51.859742  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:52.359728  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:52.859803  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:53.359731  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:53.859729  992109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:33:53.963052  992109 kubeadm.go:1113] duration metric: took 4.292471944s to wait for elevateKubeSystemPrivileges
	I0120 12:33:53.963109  992109 kubeadm.go:394] duration metric: took 5m1.161906665s to StartCluster
	I0120 12:33:53.963139  992109 settings.go:142] acquiring lock: {Name:mk1751cb86b61fcfcb0ff093c93fb653ca2002ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:33:53.963257  992109 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:33:53.964929  992109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/kubeconfig: {Name:mk7b2d17f701fc845d991764ab9cc32b8b0646e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:33:53.965243  992109 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 12:33:53.965321  992109 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 12:33:53.965437  992109 addons.go:69] Setting storage-provisioner=true in profile "no-preload-496524"
	I0120 12:33:53.965452  992109 addons.go:69] Setting dashboard=true in profile "no-preload-496524"
	I0120 12:33:53.965477  992109 addons.go:238] Setting addon storage-provisioner=true in "no-preload-496524"
	W0120 12:33:53.965487  992109 addons.go:247] addon storage-provisioner should already be in state true
	I0120 12:33:53.965490  992109 addons.go:238] Setting addon dashboard=true in "no-preload-496524"
	I0120 12:33:53.965481  992109 addons.go:69] Setting default-storageclass=true in profile "no-preload-496524"
	W0120 12:33:53.965502  992109 addons.go:247] addon dashboard should already be in state true
	I0120 12:33:53.965518  992109 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-496524"
	I0120 12:33:53.965520  992109 config.go:182] Loaded profile config "no-preload-496524": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:33:53.965528  992109 host.go:66] Checking if "no-preload-496524" exists ...
	I0120 12:33:53.965534  992109 host.go:66] Checking if "no-preload-496524" exists ...
	I0120 12:33:53.965514  992109 addons.go:69] Setting metrics-server=true in profile "no-preload-496524"
	I0120 12:33:53.965570  992109 addons.go:238] Setting addon metrics-server=true in "no-preload-496524"
	W0120 12:33:53.965584  992109 addons.go:247] addon metrics-server should already be in state true
	I0120 12:33:53.965628  992109 host.go:66] Checking if "no-preload-496524" exists ...
	I0120 12:33:53.965928  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:33:53.965934  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:33:53.965947  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:33:53.965963  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:53.965985  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:53.966029  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:33:53.966054  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:53.966065  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:53.966567  992109 out.go:177] * Verifying Kubernetes components...
	I0120 12:33:53.967881  992109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:33:53.983553  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43169
	I0120 12:33:53.984079  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:53.984654  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:33:53.984681  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:53.985111  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:53.985353  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetState
	I0120 12:33:53.986475  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38587
	I0120 12:33:53.986716  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33637
	I0120 12:33:53.987021  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:53.987492  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:53.987571  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:33:53.987588  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:53.987741  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44123
	I0120 12:33:53.987942  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:53.988075  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:53.988425  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:33:53.988440  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:53.988577  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:33:53.988627  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:53.988783  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:33:53.988797  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:53.988855  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:53.989000  992109 addons.go:238] Setting addon default-storageclass=true in "no-preload-496524"
	W0120 12:33:53.989019  992109 addons.go:247] addon default-storageclass should already be in state true
	I0120 12:33:53.989052  992109 host.go:66] Checking if "no-preload-496524" exists ...
	I0120 12:33:53.989187  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:53.989393  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:33:53.989420  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:33:53.989431  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:53.989455  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:53.989672  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:33:53.989711  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:54.005609  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46523
	I0120 12:33:54.006182  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:54.006760  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:33:54.006786  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:54.007131  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41029
	I0120 12:33:54.007443  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:54.008065  992109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:33:54.008108  992109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:33:54.008308  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34409
	I0120 12:33:54.008359  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:54.008993  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:33:54.009021  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:54.009407  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:54.009597  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetState
	I0120 12:33:54.011591  992109 main.go:141] libmachine: (no-preload-496524) Calling .DriverName
	I0120 12:33:54.013572  992109 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0120 12:33:54.014814  992109 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0120 12:33:54.015103  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:54.015538  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:33:54.015562  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:54.015921  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0120 12:33:54.015946  992109 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0120 12:33:54.015970  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHHostname
	I0120 12:33:54.015997  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:54.016619  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetState
	I0120 12:33:54.018868  992109 main.go:141] libmachine: (no-preload-496524) Calling .DriverName
	I0120 12:33:54.019948  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:33:54.020370  992109 main.go:141] libmachine: (no-preload-496524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:8f:cb", ip: ""} in network mk-no-preload-496524: {Iface:virbr3 ExpiryTime:2025-01-20 13:28:26 +0000 UTC Type:0 Mac:52:54:00:13:8f:cb Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-496524 Clientid:01:52:54:00:13:8f:cb}
	I0120 12:33:54.020397  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined IP address 192.168.61.107 and MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:33:54.020522  992109 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0120 12:33:54.020716  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHPort
	I0120 12:33:54.020885  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHKeyPath
	I0120 12:33:54.020989  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHUsername
	I0120 12:33:54.021095  992109 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/no-preload-496524/id_rsa Username:docker}
	I0120 12:33:54.021561  992109 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 12:33:54.021576  992109 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 12:33:54.021592  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHHostname
	I0120 12:33:54.024577  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHPort
	I0120 12:33:54.024641  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:33:54.024669  992109 main.go:141] libmachine: (no-preload-496524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:8f:cb", ip: ""} in network mk-no-preload-496524: {Iface:virbr3 ExpiryTime:2025-01-20 13:28:26 +0000 UTC Type:0 Mac:52:54:00:13:8f:cb Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-496524 Clientid:01:52:54:00:13:8f:cb}
	I0120 12:33:54.024695  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined IP address 192.168.61.107 and MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:33:54.024723  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHKeyPath
	I0120 12:33:54.024878  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHUsername
	I0120 12:33:54.025140  992109 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/no-preload-496524/id_rsa Username:docker}
	I0120 12:33:54.032584  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39261
	I0120 12:33:54.032936  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:54.033474  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:33:54.033497  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:54.033809  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:54.034011  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetState
	I0120 12:33:54.035349  992109 main.go:141] libmachine: (no-preload-496524) Calling .DriverName
	I0120 12:33:54.035539  992109 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 12:33:54.035557  992109 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 12:33:54.035573  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHHostname
	I0120 12:33:54.037812  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:33:54.038056  992109 main.go:141] libmachine: (no-preload-496524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:8f:cb", ip: ""} in network mk-no-preload-496524: {Iface:virbr3 ExpiryTime:2025-01-20 13:28:26 +0000 UTC Type:0 Mac:52:54:00:13:8f:cb Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-496524 Clientid:01:52:54:00:13:8f:cb}
	I0120 12:33:54.038080  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined IP address 192.168.61.107 and MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:33:54.038193  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHPort
	I0120 12:33:54.038321  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHKeyPath
	I0120 12:33:54.038429  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHUsername
	I0120 12:33:54.038547  992109 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/no-preload-496524/id_rsa Username:docker}
	I0120 12:33:54.041727  992109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43939
	I0120 12:33:54.042162  992109 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:33:54.042671  992109 main.go:141] libmachine: Using API Version  1
	I0120 12:33:54.042694  992109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:33:54.043048  992109 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:33:54.043263  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetState
	I0120 12:33:54.044523  992109 main.go:141] libmachine: (no-preload-496524) Calling .DriverName
	I0120 12:33:54.046748  992109 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:33:51.190620  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:53.685783  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:54.048049  992109 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:33:54.048070  992109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 12:33:54.048087  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHHostname
	I0120 12:33:54.050560  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:33:54.051116  992109 main.go:141] libmachine: (no-preload-496524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:8f:cb", ip: ""} in network mk-no-preload-496524: {Iface:virbr3 ExpiryTime:2025-01-20 13:28:26 +0000 UTC Type:0 Mac:52:54:00:13:8f:cb Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-496524 Clientid:01:52:54:00:13:8f:cb}
	I0120 12:33:54.051143  992109 main.go:141] libmachine: (no-preload-496524) DBG | domain no-preload-496524 has defined IP address 192.168.61.107 and MAC address 52:54:00:13:8f:cb in network mk-no-preload-496524
	I0120 12:33:54.051300  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHPort
	I0120 12:33:54.051493  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHKeyPath
	I0120 12:33:54.051649  992109 main.go:141] libmachine: (no-preload-496524) Calling .GetSSHUsername
	I0120 12:33:54.051769  992109 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/no-preload-496524/id_rsa Username:docker}
	I0120 12:33:54.174035  992109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:33:54.197637  992109 node_ready.go:35] waiting up to 6m0s for node "no-preload-496524" to be "Ready" ...
	I0120 12:33:54.210713  992109 node_ready.go:49] node "no-preload-496524" has status "Ready":"True"
	I0120 12:33:54.210742  992109 node_ready.go:38] duration metric: took 13.074849ms for node "no-preload-496524" to be "Ready" ...
	I0120 12:33:54.210757  992109 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:33:54.218615  992109 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-8pf2c" in "kube-system" namespace to be "Ready" ...
	I0120 12:33:54.300046  992109 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 12:33:54.300080  992109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0120 12:33:54.351225  992109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:33:54.353768  992109 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 12:33:54.353789  992109 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 12:33:54.368467  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0120 12:33:54.368496  992109 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0120 12:33:54.371467  992109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 12:33:54.389639  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0120 12:33:54.389660  992109 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0120 12:33:54.401448  992109 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 12:33:54.401467  992109 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 12:33:54.465233  992109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 12:33:54.465824  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0120 12:33:54.465853  992109 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0120 12:33:54.543139  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0120 12:33:54.543178  992109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0120 12:33:54.687210  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0120 12:33:54.687234  992109 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0120 12:33:54.744978  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0120 12:33:54.745012  992109 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0120 12:33:54.771298  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0120 12:33:54.771332  992109 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0120 12:33:54.852878  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0120 12:33:54.852914  992109 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0120 12:33:54.886329  992109 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 12:33:54.886362  992109 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0120 12:33:54.964102  992109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 12:33:55.906127  992109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.534613086s)
	I0120 12:33:55.906207  992109 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:55.906212  992109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.554946671s)
	I0120 12:33:55.906270  992109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.440998293s)
	I0120 12:33:55.906220  992109 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:33:55.906307  992109 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:55.906338  992109 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:33:55.906275  992109 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:55.906404  992109 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:33:55.906812  992109 main.go:141] libmachine: (no-preload-496524) DBG | Closing plugin on server side
	I0120 12:33:55.906854  992109 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:55.906855  992109 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:55.906862  992109 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:55.906874  992109 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:55.906877  992109 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:55.906883  992109 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:55.906886  992109 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:33:55.906893  992109 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:33:55.907039  992109 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:55.907058  992109 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:55.907081  992109 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:55.907090  992109 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:33:55.907187  992109 main.go:141] libmachine: (no-preload-496524) DBG | Closing plugin on server side
	I0120 12:33:55.907189  992109 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:55.907213  992109 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:55.908759  992109 main.go:141] libmachine: (no-preload-496524) DBG | Closing plugin on server side
	I0120 12:33:55.908766  992109 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:55.908783  992109 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:55.908801  992109 addons.go:479] Verifying addon metrics-server=true in "no-preload-496524"
	I0120 12:33:55.909118  992109 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:55.909137  992109 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:55.939415  992109 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:55.939434  992109 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:33:55.939756  992109 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:55.939772  992109 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:56.225171  992109 pod_ready.go:103] pod "coredns-668d6bf9bc-8pf2c" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:56.900293  992109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.936108167s)
	I0120 12:33:56.900402  992109 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:56.900428  992109 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:33:56.900904  992109 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:56.900913  992109 main.go:141] libmachine: (no-preload-496524) DBG | Closing plugin on server side
	I0120 12:33:56.900924  992109 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:56.900945  992109 main.go:141] libmachine: Making call to close driver server
	I0120 12:33:56.900952  992109 main.go:141] libmachine: (no-preload-496524) Calling .Close
	I0120 12:33:56.901226  992109 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:33:56.901246  992109 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:33:56.902642  992109 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-496524 addons enable metrics-server
	
	I0120 12:33:56.904289  992109 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0120 12:33:51.982905  993585 cri.go:89] found id: ""
	I0120 12:33:51.982931  993585 logs.go:282] 0 containers: []
	W0120 12:33:51.982939  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:51.982950  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:51.982998  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:52.017989  993585 cri.go:89] found id: ""
	I0120 12:33:52.018029  993585 logs.go:282] 0 containers: []
	W0120 12:33:52.018041  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:52.018049  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:52.018117  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:52.050405  993585 cri.go:89] found id: ""
	I0120 12:33:52.050432  993585 logs.go:282] 0 containers: []
	W0120 12:33:52.050442  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:52.050450  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:52.050540  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:52.080729  993585 cri.go:89] found id: ""
	I0120 12:33:52.080760  993585 logs.go:282] 0 containers: []
	W0120 12:33:52.080767  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:52.080773  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:52.080826  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:52.110809  993585 cri.go:89] found id: ""
	I0120 12:33:52.110839  993585 logs.go:282] 0 containers: []
	W0120 12:33:52.110849  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:52.110856  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:52.110915  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:52.143357  993585 cri.go:89] found id: ""
	I0120 12:33:52.143387  993585 logs.go:282] 0 containers: []
	W0120 12:33:52.143397  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:52.143405  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:52.143475  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:52.179555  993585 cri.go:89] found id: ""
	I0120 12:33:52.179584  993585 logs.go:282] 0 containers: []
	W0120 12:33:52.179594  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:52.179607  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:52.179622  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:52.268223  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:52.268257  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:52.304968  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:52.305008  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:52.354773  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:52.354811  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:52.366909  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:52.366933  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:52.434038  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:54.934844  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:54.954370  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:54.954453  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:54.987088  993585 cri.go:89] found id: ""
	I0120 12:33:54.987124  993585 logs.go:282] 0 containers: []
	W0120 12:33:54.987136  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:54.987144  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:54.987207  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:55.020248  993585 cri.go:89] found id: ""
	I0120 12:33:55.020282  993585 logs.go:282] 0 containers: []
	W0120 12:33:55.020293  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:55.020301  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:55.020374  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:55.059488  993585 cri.go:89] found id: ""
	I0120 12:33:55.059529  993585 logs.go:282] 0 containers: []
	W0120 12:33:55.059541  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:55.059550  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:55.059614  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:55.095049  993585 cri.go:89] found id: ""
	I0120 12:33:55.095088  993585 logs.go:282] 0 containers: []
	W0120 12:33:55.095102  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:55.095112  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:55.095189  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:55.131993  993585 cri.go:89] found id: ""
	I0120 12:33:55.132028  993585 logs.go:282] 0 containers: []
	W0120 12:33:55.132039  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:55.132045  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:55.132107  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:55.168716  993585 cri.go:89] found id: ""
	I0120 12:33:55.168744  993585 logs.go:282] 0 containers: []
	W0120 12:33:55.168755  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:55.168764  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:55.168828  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:55.211532  993585 cri.go:89] found id: ""
	I0120 12:33:55.211566  993585 logs.go:282] 0 containers: []
	W0120 12:33:55.211578  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:55.211591  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:55.211658  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:55.245961  993585 cri.go:89] found id: ""
	I0120 12:33:55.245993  993585 logs.go:282] 0 containers: []
	W0120 12:33:55.246004  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:55.246019  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:55.246036  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:33:55.297819  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:55.297865  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:55.314469  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:55.314514  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:55.386489  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:55.386544  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:55.386566  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:55.466897  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:55.466954  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:52.642467  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:55.143921  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:55.686287  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:58.185263  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:56.905477  992109 addons.go:514] duration metric: took 2.940174389s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I0120 12:33:57.224557  992109 pod_ready.go:93] pod "coredns-668d6bf9bc-8pf2c" in "kube-system" namespace has status "Ready":"True"
	I0120 12:33:57.224585  992109 pod_ready.go:82] duration metric: took 3.005934718s for pod "coredns-668d6bf9bc-8pf2c" in "kube-system" namespace to be "Ready" ...
	I0120 12:33:57.224599  992109 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-rdj6t" in "kube-system" namespace to be "Ready" ...
	I0120 12:33:57.228981  992109 pod_ready.go:93] pod "coredns-668d6bf9bc-rdj6t" in "kube-system" namespace has status "Ready":"True"
	I0120 12:33:57.228999  992109 pod_ready.go:82] duration metric: took 4.392102ms for pod "coredns-668d6bf9bc-rdj6t" in "kube-system" namespace to be "Ready" ...
	I0120 12:33:57.229007  992109 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:33:59.239998  992109 pod_ready.go:103] pod "etcd-no-preload-496524" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:58.014588  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:33:58.032828  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:33:58.032905  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:33:58.075631  993585 cri.go:89] found id: ""
	I0120 12:33:58.075671  993585 logs.go:282] 0 containers: []
	W0120 12:33:58.075774  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:33:58.075801  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:33:58.075887  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:33:58.117897  993585 cri.go:89] found id: ""
	I0120 12:33:58.117934  993585 logs.go:282] 0 containers: []
	W0120 12:33:58.117945  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:33:58.117953  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:33:58.118022  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:33:58.161106  993585 cri.go:89] found id: ""
	I0120 12:33:58.161138  993585 logs.go:282] 0 containers: []
	W0120 12:33:58.161149  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:33:58.161157  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:33:58.161222  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:33:58.203869  993585 cri.go:89] found id: ""
	I0120 12:33:58.203905  993585 logs.go:282] 0 containers: []
	W0120 12:33:58.203915  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:33:58.203923  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:33:58.203991  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:33:58.247905  993585 cri.go:89] found id: ""
	I0120 12:33:58.247938  993585 logs.go:282] 0 containers: []
	W0120 12:33:58.247949  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:33:58.247956  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:33:58.248016  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:33:58.281395  993585 cri.go:89] found id: ""
	I0120 12:33:58.281426  993585 logs.go:282] 0 containers: []
	W0120 12:33:58.281437  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:33:58.281445  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:33:58.281506  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:33:58.318950  993585 cri.go:89] found id: ""
	I0120 12:33:58.318982  993585 logs.go:282] 0 containers: []
	W0120 12:33:58.318991  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:33:58.318996  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:33:58.319055  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:33:58.351052  993585 cri.go:89] found id: ""
	I0120 12:33:58.351080  993585 logs.go:282] 0 containers: []
	W0120 12:33:58.351089  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:33:58.351107  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:33:58.351134  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:33:58.363459  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:33:58.363489  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:33:58.427460  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:33:58.427502  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:33:58.427520  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:33:58.502031  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:33:58.502065  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:33:58.539404  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:33:58.539434  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:01.093414  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:01.106353  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:01.106422  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:01.145552  993585 cri.go:89] found id: ""
	I0120 12:34:01.145588  993585 logs.go:282] 0 containers: []
	W0120 12:34:01.145601  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:01.145610  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:01.145678  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:01.179253  993585 cri.go:89] found id: ""
	I0120 12:34:01.179288  993585 logs.go:282] 0 containers: []
	W0120 12:34:01.179299  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:01.179307  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:01.179374  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:01.215878  993585 cri.go:89] found id: ""
	I0120 12:34:01.215916  993585 logs.go:282] 0 containers: []
	W0120 12:34:01.215928  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:01.215937  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:01.216001  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:01.260751  993585 cri.go:89] found id: ""
	I0120 12:34:01.260783  993585 logs.go:282] 0 containers: []
	W0120 12:34:01.260795  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:01.260807  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:01.260883  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:01.303022  993585 cri.go:89] found id: ""
	I0120 12:34:01.303053  993585 logs.go:282] 0 containers: []
	W0120 12:34:01.303065  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:01.303074  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:01.303145  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:01.342483  993585 cri.go:89] found id: ""
	I0120 12:34:01.342539  993585 logs.go:282] 0 containers: []
	W0120 12:34:01.342552  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:01.342562  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:01.342642  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:01.374569  993585 cri.go:89] found id: ""
	I0120 12:34:01.374608  993585 logs.go:282] 0 containers: []
	W0120 12:34:01.374618  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:01.374633  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:01.374696  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:01.406807  993585 cri.go:89] found id: ""
	I0120 12:34:01.406838  993585 logs.go:282] 0 containers: []
	W0120 12:34:01.406848  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:01.406862  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:01.406887  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:01.446081  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:01.446111  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:01.498826  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:01.498865  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:01.512333  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:01.512370  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:01.591631  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:01.591658  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:01.591676  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
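Editor's note (not part of the captured log): the repeated "listing CRI containers" / "crictl ps -a --quiet --name=..." pairs above are minikube probing each control-plane component for a matching container before deciding which logs to gather. The following Go sketch mirrors that probe loop under stated assumptions; it is illustrative only, not minikube's actual implementation, and assumes crictl is on PATH and runnable via sudo.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// probe runs the same crictl query seen in the log and returns the IDs
// of containers whose name matches the given component, if any.
func probe(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
	for _, c := range components {
		ids, err := probe(c)
		switch {
		case err != nil:
			fmt.Printf("crictl query for %q failed: %v\n", c, err)
		case len(ids) == 0:
			fmt.Printf("no container was found matching %q\n", c)
		default:
			fmt.Printf("%s: %v\n", c, ids)
		}
	}
}

In the log above every probe returns an empty ID list, which is why the run falls back to gathering only kubelet, dmesg, CRI-O, and container-status logs.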
	I0120 12:33:57.641818  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:00.141288  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:02.142885  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:00.685449  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:02.688229  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:01.734840  992109 pod_ready.go:103] pod "etcd-no-preload-496524" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:03.790112  992109 pod_ready.go:103] pod "etcd-no-preload-496524" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:04.235638  992109 pod_ready.go:93] pod "etcd-no-preload-496524" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:04.235671  992109 pod_ready.go:82] duration metric: took 7.006654161s for pod "etcd-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:04.235686  992109 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:04.240203  992109 pod_ready.go:93] pod "kube-apiserver-no-preload-496524" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:04.240233  992109 pod_ready.go:82] duration metric: took 4.537744ms for pod "kube-apiserver-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:04.240248  992109 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:04.244405  992109 pod_ready.go:93] pod "kube-controller-manager-no-preload-496524" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:04.244431  992109 pod_ready.go:82] duration metric: took 4.172774ms for pod "kube-controller-manager-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:04.244445  992109 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dpn56" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:04.248277  992109 pod_ready.go:93] pod "kube-proxy-dpn56" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:04.248303  992109 pod_ready.go:82] duration metric: took 3.849341ms for pod "kube-proxy-dpn56" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:04.248315  992109 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:04.251995  992109 pod_ready.go:93] pod "kube-scheduler-no-preload-496524" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:04.252016  992109 pod_ready.go:82] duration metric: took 3.69304ms for pod "kube-scheduler-no-preload-496524" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:04.252025  992109 pod_ready.go:39] duration metric: took 10.041253574s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:34:04.252040  992109 api_server.go:52] waiting for apiserver process to appear ...
	I0120 12:34:04.252101  992109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:04.288797  992109 api_server.go:72] duration metric: took 10.323505838s to wait for apiserver process to appear ...
	I0120 12:34:04.288829  992109 api_server.go:88] waiting for apiserver healthz status ...
	I0120 12:34:04.288878  992109 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0120 12:34:04.297424  992109 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
	I0120 12:34:04.299152  992109 api_server.go:141] control plane version: v1.32.0
	I0120 12:34:04.299176  992109 api_server.go:131] duration metric: took 10.340981ms to wait for apiserver health ...
	I0120 12:34:04.299188  992109 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 12:34:04.437151  992109 system_pods.go:59] 9 kube-system pods found
	I0120 12:34:04.437187  992109 system_pods.go:61] "coredns-668d6bf9bc-8pf2c" [9402090c-afdc-4fd7-a673-155ca87b9afe] Running
	I0120 12:34:04.437194  992109 system_pods.go:61] "coredns-668d6bf9bc-rdj6t" [f7882da6-0b57-402a-a902-6c4e6a8c6cd1] Running
	I0120 12:34:04.437200  992109 system_pods.go:61] "etcd-no-preload-496524" [430610d7-4491-4d35-93d6-71738b1cad0f] Running
	I0120 12:34:04.437205  992109 system_pods.go:61] "kube-apiserver-no-preload-496524" [d028d3c0-5ee8-46cc-b8e5-95f7d07e43ca] Running
	I0120 12:34:04.437210  992109 system_pods.go:61] "kube-controller-manager-no-preload-496524" [b11b36da-c5a3-4fc6-8619-4f12fda64f63] Running
	I0120 12:34:04.437215  992109 system_pods.go:61] "kube-proxy-dpn56" [dbb78c21-4dfb-4a4f-9ca0-ff006da5d4b4] Running
	I0120 12:34:04.437219  992109 system_pods.go:61] "kube-scheduler-no-preload-496524" [80058f6c-526c-487f-82a5-74df5f2e0174] Running
	I0120 12:34:04.437227  992109 system_pods.go:61] "metrics-server-f79f97bbb-dbx78" [c8fb707c-75c2-42b6-802e-52a09222f9ea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 12:34:04.437234  992109 system_pods.go:61] "storage-provisioner" [14187f8e-01fd-45ac-a749-82ba272b727f] Running
	I0120 12:34:04.437246  992109 system_pods.go:74] duration metric: took 138.05086ms to wait for pod list to return data ...
	I0120 12:34:04.437257  992109 default_sa.go:34] waiting for default service account to be created ...
	I0120 12:34:04.636609  992109 default_sa.go:45] found service account: "default"
	I0120 12:34:04.636747  992109 default_sa.go:55] duration metric: took 199.476374ms for default service account to be created ...
	I0120 12:34:04.636770  992109 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 12:34:04.836002  992109 system_pods.go:87] 9 kube-system pods found
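Editor's note (not part of the captured log): the 992109 run above succeeds; once its control-plane pods report Ready it polls the apiserver's /healthz endpoint until it returns 200 "ok" (12:34:04.297). The sketch below shows such a poll in Go. It is a minimal illustration only: the endpoint URL is copied from the log, while the insecure TLS setting is an assumption made solely to keep the example self-contained (a real client would trust the cluster CA from the kubeconfig).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// InsecureSkipVerify keeps the sketch self-contained; do not use it in real tooling.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.61.107:8443/healthz" // endpoint taken from the log above
	for i := 0; i < 30; i++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("apiserver never became healthy")
}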
	I0120 12:34:04.171834  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:04.189904  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:04.189975  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:04.227671  993585 cri.go:89] found id: ""
	I0120 12:34:04.227705  993585 logs.go:282] 0 containers: []
	W0120 12:34:04.227717  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:04.227725  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:04.227789  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:04.266288  993585 cri.go:89] found id: ""
	I0120 12:34:04.266319  993585 logs.go:282] 0 containers: []
	W0120 12:34:04.266329  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:04.266337  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:04.266415  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:04.303909  993585 cri.go:89] found id: ""
	I0120 12:34:04.303944  993585 logs.go:282] 0 containers: []
	W0120 12:34:04.303952  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:04.303965  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:04.304029  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:04.342095  993585 cri.go:89] found id: ""
	I0120 12:34:04.342135  993585 logs.go:282] 0 containers: []
	W0120 12:34:04.342148  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:04.342156  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:04.342220  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:04.374237  993585 cri.go:89] found id: ""
	I0120 12:34:04.374268  993585 logs.go:282] 0 containers: []
	W0120 12:34:04.374290  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:04.374299  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:04.374383  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:04.407930  993585 cri.go:89] found id: ""
	I0120 12:34:04.407962  993585 logs.go:282] 0 containers: []
	W0120 12:34:04.407973  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:04.407981  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:04.408047  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:04.444108  993585 cri.go:89] found id: ""
	I0120 12:34:04.444133  993585 logs.go:282] 0 containers: []
	W0120 12:34:04.444140  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:04.444146  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:04.444208  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:04.482725  993585 cri.go:89] found id: ""
	I0120 12:34:04.482759  993585 logs.go:282] 0 containers: []
	W0120 12:34:04.482770  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:04.482783  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:04.482796  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:04.536692  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:04.536732  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:04.549928  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:04.549952  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:04.616622  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:04.616645  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:04.616661  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:04.701813  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:04.701846  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:04.642669  992635 pod_ready.go:103] pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:05.136388  992635 pod_ready.go:82] duration metric: took 4m0.000888072s for pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace to be "Ready" ...
	E0120 12:34:05.136424  992635 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-shgd4" in "kube-system" namespace to be "Ready" (will not retry!)
	I0120 12:34:05.136487  992635 pod_ready.go:39] duration metric: took 4m15.539523942s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:34:05.136548  992635 kubeadm.go:597] duration metric: took 4m23.239372129s to restartPrimaryControlPlane
	W0120 12:34:05.136646  992635 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0120 12:34:05.136701  992635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 12:34:05.185480  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:07.185630  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:09.185867  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
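Editor's note (not part of the captured log): the interleaved pod_ready lines come from separate profiles waiting on the Kubernetes PodReady condition. Above, 992635 exhausts its 4m0s budget for metrics-server and falls back to "kubeadm reset", while 993131 keeps polling. A hedged client-go sketch of the same readiness check follows; the kubeconfig path is a placeholder and the pod name is taken from the log only as an example, so this is not minikube's code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder path; on the minikube node the tests use /var/lib/minikube/kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	deadline := time.Now().Add(4 * time.Minute) // same 4m0s budget as in the log
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(
			context.TODO(), "metrics-server-f79f97bbb-shgd4", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}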
	I0120 12:34:07.245120  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:07.257846  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:07.257917  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:07.293851  993585 cri.go:89] found id: ""
	I0120 12:34:07.293885  993585 logs.go:282] 0 containers: []
	W0120 12:34:07.293898  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:07.293906  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:07.293970  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:07.328532  993585 cri.go:89] found id: ""
	I0120 12:34:07.328568  993585 logs.go:282] 0 containers: []
	W0120 12:34:07.328579  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:07.328588  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:07.328652  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:07.362019  993585 cri.go:89] found id: ""
	I0120 12:34:07.362053  993585 logs.go:282] 0 containers: []
	W0120 12:34:07.362065  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:07.362073  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:07.362136  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:07.394170  993585 cri.go:89] found id: ""
	I0120 12:34:07.394211  993585 logs.go:282] 0 containers: []
	W0120 12:34:07.394223  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:07.394231  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:07.394303  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:07.426650  993585 cri.go:89] found id: ""
	I0120 12:34:07.426694  993585 logs.go:282] 0 containers: []
	W0120 12:34:07.426711  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:07.426719  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:07.426786  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:07.472659  993585 cri.go:89] found id: ""
	I0120 12:34:07.472695  993585 logs.go:282] 0 containers: []
	W0120 12:34:07.472706  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:07.472715  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:07.472788  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:07.506741  993585 cri.go:89] found id: ""
	I0120 12:34:07.506768  993585 logs.go:282] 0 containers: []
	W0120 12:34:07.506777  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:07.506782  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:07.506845  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:07.543976  993585 cri.go:89] found id: ""
	I0120 12:34:07.544007  993585 logs.go:282] 0 containers: []
	W0120 12:34:07.544017  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:07.544028  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:07.544039  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:07.618073  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:07.618109  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:07.633284  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:07.633332  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:07.703104  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:07.703134  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:07.703151  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:07.786367  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:07.786404  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:10.324611  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:10.337443  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:10.337513  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:10.371387  993585 cri.go:89] found id: ""
	I0120 12:34:10.371421  993585 logs.go:282] 0 containers: []
	W0120 12:34:10.371432  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:10.371489  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:10.371545  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:10.403803  993585 cri.go:89] found id: ""
	I0120 12:34:10.403829  993585 logs.go:282] 0 containers: []
	W0120 12:34:10.403837  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:10.403843  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:10.403891  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:10.434806  993585 cri.go:89] found id: ""
	I0120 12:34:10.434829  993585 logs.go:282] 0 containers: []
	W0120 12:34:10.434836  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:10.434841  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:10.434897  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:10.465821  993585 cri.go:89] found id: ""
	I0120 12:34:10.465849  993585 logs.go:282] 0 containers: []
	W0120 12:34:10.465856  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:10.465861  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:10.465905  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:10.497007  993585 cri.go:89] found id: ""
	I0120 12:34:10.497029  993585 logs.go:282] 0 containers: []
	W0120 12:34:10.497037  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:10.497043  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:10.497086  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:10.527026  993585 cri.go:89] found id: ""
	I0120 12:34:10.527050  993585 logs.go:282] 0 containers: []
	W0120 12:34:10.527060  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:10.527069  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:10.527134  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:10.557590  993585 cri.go:89] found id: ""
	I0120 12:34:10.557621  993585 logs.go:282] 0 containers: []
	W0120 12:34:10.557631  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:10.557638  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:10.557694  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:10.587747  993585 cri.go:89] found id: ""
	I0120 12:34:10.587777  993585 logs.go:282] 0 containers: []
	W0120 12:34:10.587787  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:10.587799  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:10.587813  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:10.635855  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:10.635886  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:10.649110  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:10.649147  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:10.719339  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:10.719382  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:10.719399  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:10.791808  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:10.791839  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
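Editor's note (not part of the captured log): the "container status" step above shells out to sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, i.e. it prefers crictl and falls back to docker if crictl is missing or fails. The Go sketch below simply reruns that one-liner (copied verbatim from the log) and is illustrative only.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exact fallback command from the log: try crictl first, then docker.
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Println("both crictl and docker listings failed:", err)
	}
	fmt.Print(string(out))
}

The fallback matters here because the node runs CRI-O: docker is typically absent, so a working crictl is the only way this step produces output.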
	I0120 12:34:11.684329  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:13.686198  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:13.343317  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:13.356667  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:13.356731  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:13.388894  993585 cri.go:89] found id: ""
	I0120 12:34:13.388926  993585 logs.go:282] 0 containers: []
	W0120 12:34:13.388937  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:13.388944  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:13.389013  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:13.419319  993585 cri.go:89] found id: ""
	I0120 12:34:13.419350  993585 logs.go:282] 0 containers: []
	W0120 12:34:13.419360  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:13.419374  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:13.419440  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:13.451302  993585 cri.go:89] found id: ""
	I0120 12:34:13.451328  993585 logs.go:282] 0 containers: []
	W0120 12:34:13.451335  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:13.451345  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:13.451398  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:13.485033  993585 cri.go:89] found id: ""
	I0120 12:34:13.485062  993585 logs.go:282] 0 containers: []
	W0120 12:34:13.485073  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:13.485079  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:13.485126  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:13.515362  993585 cri.go:89] found id: ""
	I0120 12:34:13.515392  993585 logs.go:282] 0 containers: []
	W0120 12:34:13.515401  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:13.515410  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:13.515481  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:13.545307  993585 cri.go:89] found id: ""
	I0120 12:34:13.545356  993585 logs.go:282] 0 containers: []
	W0120 12:34:13.545366  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:13.545374  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:13.545436  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:13.575714  993585 cri.go:89] found id: ""
	I0120 12:34:13.575738  993585 logs.go:282] 0 containers: []
	W0120 12:34:13.575745  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:13.575751  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:13.575805  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:13.606046  993585 cri.go:89] found id: ""
	I0120 12:34:13.606099  993585 logs.go:282] 0 containers: []
	W0120 12:34:13.606112  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:13.606127  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:13.606145  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:13.667543  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:13.667567  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:13.667584  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:13.741766  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:13.741795  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:13.778095  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:13.778131  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:13.830514  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:13.830554  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:16.343728  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:16.356586  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:16.356665  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:16.390098  993585 cri.go:89] found id: ""
	I0120 12:34:16.390132  993585 logs.go:282] 0 containers: []
	W0120 12:34:16.390146  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:16.390155  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:16.390228  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:16.422651  993585 cri.go:89] found id: ""
	I0120 12:34:16.422682  993585 logs.go:282] 0 containers: []
	W0120 12:34:16.422690  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:16.422699  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:16.422755  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:16.455349  993585 cri.go:89] found id: ""
	I0120 12:34:16.455378  993585 logs.go:282] 0 containers: []
	W0120 12:34:16.455390  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:16.455398  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:16.455467  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:16.494862  993585 cri.go:89] found id: ""
	I0120 12:34:16.494893  993585 logs.go:282] 0 containers: []
	W0120 12:34:16.494904  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:16.494911  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:16.494975  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:16.526039  993585 cri.go:89] found id: ""
	I0120 12:34:16.526070  993585 logs.go:282] 0 containers: []
	W0120 12:34:16.526079  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:16.526087  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:16.526159  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:16.557323  993585 cri.go:89] found id: ""
	I0120 12:34:16.557360  993585 logs.go:282] 0 containers: []
	W0120 12:34:16.557376  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:16.557382  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:16.557444  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:16.607483  993585 cri.go:89] found id: ""
	I0120 12:34:16.607516  993585 logs.go:282] 0 containers: []
	W0120 12:34:16.607527  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:16.607535  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:16.607600  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:16.639620  993585 cri.go:89] found id: ""
	I0120 12:34:16.639644  993585 logs.go:282] 0 containers: []
	W0120 12:34:16.639654  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:16.639665  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:16.639681  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:16.675471  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:16.675500  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:16.726780  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:16.726814  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:16.739029  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:16.739060  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:16.802705  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:16.802738  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:16.802752  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:16.185205  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:18.685055  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:19.379610  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:19.392739  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:19.392813  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:19.423927  993585 cri.go:89] found id: ""
	I0120 12:34:19.423959  993585 logs.go:282] 0 containers: []
	W0120 12:34:19.423971  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:19.423979  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:19.424049  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:19.455104  993585 cri.go:89] found id: ""
	I0120 12:34:19.455131  993585 logs.go:282] 0 containers: []
	W0120 12:34:19.455140  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:19.455145  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:19.455192  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:19.487611  993585 cri.go:89] found id: ""
	I0120 12:34:19.487642  993585 logs.go:282] 0 containers: []
	W0120 12:34:19.487652  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:19.487664  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:19.487728  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:19.517582  993585 cri.go:89] found id: ""
	I0120 12:34:19.517613  993585 logs.go:282] 0 containers: []
	W0120 12:34:19.517638  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:19.517665  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:19.517734  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:19.549138  993585 cri.go:89] found id: ""
	I0120 12:34:19.549177  993585 logs.go:282] 0 containers: []
	W0120 12:34:19.549190  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:19.549199  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:19.549263  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:19.584290  993585 cri.go:89] found id: ""
	I0120 12:34:19.584317  993585 logs.go:282] 0 containers: []
	W0120 12:34:19.584328  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:19.584334  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:19.584384  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:19.618867  993585 cri.go:89] found id: ""
	I0120 12:34:19.618900  993585 logs.go:282] 0 containers: []
	W0120 12:34:19.618909  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:19.618915  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:19.618967  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:19.651916  993585 cri.go:89] found id: ""
	I0120 12:34:19.651956  993585 logs.go:282] 0 containers: []
	W0120 12:34:19.651968  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:19.651981  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:19.651997  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:19.691207  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:19.691239  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:19.742403  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:19.742436  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:19.755212  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:19.755245  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:19.818642  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:19.818671  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:19.818686  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:21.184740  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:23.685218  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:22.398142  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:22.415423  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:22.415497  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:22.450558  993585 cri.go:89] found id: ""
	I0120 12:34:22.450595  993585 logs.go:282] 0 containers: []
	W0120 12:34:22.450606  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:22.450613  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:22.450672  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:22.481655  993585 cri.go:89] found id: ""
	I0120 12:34:22.481686  993585 logs.go:282] 0 containers: []
	W0120 12:34:22.481697  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:22.481706  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:22.481773  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:22.515465  993585 cri.go:89] found id: ""
	I0120 12:34:22.515498  993585 logs.go:282] 0 containers: []
	W0120 12:34:22.515509  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:22.515516  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:22.515575  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:22.546538  993585 cri.go:89] found id: ""
	I0120 12:34:22.546566  993585 logs.go:282] 0 containers: []
	W0120 12:34:22.546575  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:22.546583  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:22.546640  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:22.577112  993585 cri.go:89] found id: ""
	I0120 12:34:22.577140  993585 logs.go:282] 0 containers: []
	W0120 12:34:22.577151  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:22.577158  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:22.577216  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:22.610604  993585 cri.go:89] found id: ""
	I0120 12:34:22.610640  993585 logs.go:282] 0 containers: []
	W0120 12:34:22.610650  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:22.610657  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:22.610718  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:22.641708  993585 cri.go:89] found id: ""
	I0120 12:34:22.641737  993585 logs.go:282] 0 containers: []
	W0120 12:34:22.641745  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:22.641752  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:22.641818  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:22.671952  993585 cri.go:89] found id: ""
	I0120 12:34:22.671977  993585 logs.go:282] 0 containers: []
	W0120 12:34:22.671984  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:22.671994  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:22.672004  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:22.722515  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:22.722552  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:22.734806  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:22.734827  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:22.797517  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:22.797554  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:22.797573  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:22.872821  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:22.872851  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:25.413129  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:25.425926  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:25.426021  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:25.462540  993585 cri.go:89] found id: ""
	I0120 12:34:25.462574  993585 logs.go:282] 0 containers: []
	W0120 12:34:25.462584  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:25.462595  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:25.462650  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:25.493646  993585 cri.go:89] found id: ""
	I0120 12:34:25.493672  993585 logs.go:282] 0 containers: []
	W0120 12:34:25.493679  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:25.493688  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:25.493732  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:25.529070  993585 cri.go:89] found id: ""
	I0120 12:34:25.529103  993585 logs.go:282] 0 containers: []
	W0120 12:34:25.529126  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:25.529135  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:25.529199  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:25.562199  993585 cri.go:89] found id: ""
	I0120 12:34:25.562225  993585 logs.go:282] 0 containers: []
	W0120 12:34:25.562258  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:25.562265  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:25.562329  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:25.597698  993585 cri.go:89] found id: ""
	I0120 12:34:25.597728  993585 logs.go:282] 0 containers: []
	W0120 12:34:25.597739  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:25.597745  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:25.597794  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:25.632923  993585 cri.go:89] found id: ""
	I0120 12:34:25.632950  993585 logs.go:282] 0 containers: []
	W0120 12:34:25.632961  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:25.632968  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:25.633031  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:25.664379  993585 cri.go:89] found id: ""
	I0120 12:34:25.664409  993585 logs.go:282] 0 containers: []
	W0120 12:34:25.664419  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:25.664434  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:25.664490  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:25.694965  993585 cri.go:89] found id: ""
	I0120 12:34:25.694992  993585 logs.go:282] 0 containers: []
	W0120 12:34:25.695002  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:25.695014  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:25.695027  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:25.742956  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:25.742987  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:25.755095  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:25.755122  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:25.822777  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:25.822807  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:25.822824  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:25.895354  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:25.895389  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:25.685681  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:28.183977  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:28.433411  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:28.445691  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:28.445750  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:28.475915  993585 cri.go:89] found id: ""
	I0120 12:34:28.475949  993585 logs.go:282] 0 containers: []
	W0120 12:34:28.475961  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:28.475969  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:28.476029  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:28.506219  993585 cri.go:89] found id: ""
	I0120 12:34:28.506253  993585 logs.go:282] 0 containers: []
	W0120 12:34:28.506264  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:28.506272  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:28.506332  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:28.539662  993585 cri.go:89] found id: ""
	I0120 12:34:28.539693  993585 logs.go:282] 0 containers: []
	W0120 12:34:28.539704  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:28.539712  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:28.539775  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:28.570360  993585 cri.go:89] found id: ""
	I0120 12:34:28.570390  993585 logs.go:282] 0 containers: []
	W0120 12:34:28.570398  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:28.570404  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:28.570466  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:28.599217  993585 cri.go:89] found id: ""
	I0120 12:34:28.599242  993585 logs.go:282] 0 containers: []
	W0120 12:34:28.599249  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:28.599255  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:28.599310  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:28.629325  993585 cri.go:89] found id: ""
	I0120 12:34:28.629366  993585 logs.go:282] 0 containers: []
	W0120 12:34:28.629378  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:28.629386  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:28.629453  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:28.659625  993585 cri.go:89] found id: ""
	I0120 12:34:28.659657  993585 logs.go:282] 0 containers: []
	W0120 12:34:28.659668  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:28.659675  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:28.659734  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:28.695195  993585 cri.go:89] found id: ""
	I0120 12:34:28.695222  993585 logs.go:282] 0 containers: []
	W0120 12:34:28.695232  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:28.695242  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:28.695255  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:28.756910  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:28.756942  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:28.771902  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:28.771932  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:28.859464  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:28.859491  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:28.859510  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:28.931739  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:28.931769  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:31.472251  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:31.484961  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:31.485019  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:31.518142  993585 cri.go:89] found id: ""
	I0120 12:34:31.518175  993585 logs.go:282] 0 containers: []
	W0120 12:34:31.518187  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:31.518194  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:31.518241  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:31.550125  993585 cri.go:89] found id: ""
	I0120 12:34:31.550187  993585 logs.go:282] 0 containers: []
	W0120 12:34:31.550201  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:31.550210  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:31.550274  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:31.583805  993585 cri.go:89] found id: ""
	I0120 12:34:31.583834  993585 logs.go:282] 0 containers: []
	W0120 12:34:31.583846  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:31.583854  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:31.583908  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:31.626186  993585 cri.go:89] found id: ""
	I0120 12:34:31.626209  993585 logs.go:282] 0 containers: []
	W0120 12:34:31.626217  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:31.626223  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:31.626271  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:31.657467  993585 cri.go:89] found id: ""
	I0120 12:34:31.657507  993585 logs.go:282] 0 containers: []
	W0120 12:34:31.657519  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:31.657527  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:31.657594  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:31.686983  993585 cri.go:89] found id: ""
	I0120 12:34:31.687008  993585 logs.go:282] 0 containers: []
	W0120 12:34:31.687015  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:31.687021  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:31.687075  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:31.721602  993585 cri.go:89] found id: ""
	I0120 12:34:31.721632  993585 logs.go:282] 0 containers: []
	W0120 12:34:31.721645  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:31.721651  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:31.721701  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:31.751369  993585 cri.go:89] found id: ""
	I0120 12:34:31.751394  993585 logs.go:282] 0 containers: []
	W0120 12:34:31.751401  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:31.751412  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:31.751435  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:31.816285  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:31.816327  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:31.816344  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:31.891930  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:31.891969  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:31.927472  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:31.927503  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:32.776819  992635 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.640090134s)
	I0120 12:34:32.776911  992635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 12:34:32.792110  992635 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 12:34:32.801453  992635 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:34:32.809836  992635 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:34:32.809855  992635 kubeadm.go:157] found existing configuration files:
	
	I0120 12:34:32.809892  992635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 12:34:32.817968  992635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:34:32.818014  992635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:34:32.826142  992635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 12:34:32.834058  992635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:34:32.834109  992635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:34:32.842776  992635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 12:34:32.850601  992635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:34:32.850645  992635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:34:32.858854  992635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 12:34:32.866819  992635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:34:32.866860  992635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 12:34:32.875193  992635 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 12:34:32.920522  992635 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 12:34:32.920570  992635 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 12:34:33.023871  992635 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 12:34:33.024001  992635 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 12:34:33.024134  992635 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 12:34:33.032806  992635 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 12:34:33.035443  992635 out.go:235]   - Generating certificates and keys ...
	I0120 12:34:33.035549  992635 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 12:34:33.035644  992635 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 12:34:33.035776  992635 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 12:34:33.035886  992635 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 12:34:33.035993  992635 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 12:34:33.036086  992635 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 12:34:33.037424  992635 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 12:34:33.037490  992635 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 12:34:33.037563  992635 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 12:34:33.037649  992635 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 12:34:33.037695  992635 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 12:34:33.037750  992635 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 12:34:33.105282  992635 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 12:34:33.414668  992635 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 12:34:33.727680  992635 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 12:34:33.812741  992635 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 12:34:33.984459  992635 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 12:34:33.985140  992635 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 12:34:33.988084  992635 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 12:34:30.184978  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:32.185137  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:31.974997  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:31.975024  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:34.488614  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:34.506548  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:34.506624  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:34.563005  993585 cri.go:89] found id: ""
	I0120 12:34:34.563039  993585 logs.go:282] 0 containers: []
	W0120 12:34:34.563052  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:34.563060  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:34.563124  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:34.594244  993585 cri.go:89] found id: ""
	I0120 12:34:34.594284  993585 logs.go:282] 0 containers: []
	W0120 12:34:34.594296  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:34.594304  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:34.594373  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:34.625619  993585 cri.go:89] found id: ""
	I0120 12:34:34.625654  993585 logs.go:282] 0 containers: []
	W0120 12:34:34.625665  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:34.625673  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:34.625738  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:34.658589  993585 cri.go:89] found id: ""
	I0120 12:34:34.658619  993585 logs.go:282] 0 containers: []
	W0120 12:34:34.658627  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:34.658635  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:34.658703  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:34.689254  993585 cri.go:89] found id: ""
	I0120 12:34:34.689283  993585 logs.go:282] 0 containers: []
	W0120 12:34:34.689294  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:34.689301  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:34.689361  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:34.718991  993585 cri.go:89] found id: ""
	I0120 12:34:34.719017  993585 logs.go:282] 0 containers: []
	W0120 12:34:34.719025  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:34.719032  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:34.719087  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:34.755470  993585 cri.go:89] found id: ""
	I0120 12:34:34.755506  993585 logs.go:282] 0 containers: []
	W0120 12:34:34.755517  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:34.755525  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:34.755591  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:34.794468  993585 cri.go:89] found id: ""
	I0120 12:34:34.794511  993585 logs.go:282] 0 containers: []
	W0120 12:34:34.794536  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:34.794550  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:34.794567  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:34.872224  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:34.872255  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:34.906752  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:34.906782  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:34.958387  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:34.958418  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:34.970224  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:34.970247  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:35.042447  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:33.990145  992635 out.go:235]   - Booting up control plane ...
	I0120 12:34:33.990278  992635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 12:34:33.990399  992635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 12:34:33.990496  992635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 12:34:34.010394  992635 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 12:34:34.017815  992635 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 12:34:34.017877  992635 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 12:34:34.137419  992635 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 12:34:34.137546  992635 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 12:34:35.139769  992635 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002196985s
	I0120 12:34:35.139867  992635 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 12:34:34.685113  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:36.685852  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:39.185481  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:39.641165  992635 kubeadm.go:310] [api-check] The API server is healthy after 4.501397328s
	I0120 12:34:39.658614  992635 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 12:34:40.171926  992635 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 12:34:40.198719  992635 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 12:34:40.198914  992635 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-987349 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 12:34:40.207929  992635 kubeadm.go:310] [bootstrap-token] Using token: n4uhes.3ig136bhcqw1unce
	I0120 12:34:40.209373  992635 out.go:235]   - Configuring RBAC rules ...
	I0120 12:34:40.209504  992635 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 12:34:40.213198  992635 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 12:34:40.219884  992635 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 12:34:40.223154  992635 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 12:34:40.228539  992635 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 12:34:40.232011  992635 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 12:34:40.369420  992635 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 12:34:40.817626  992635 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 12:34:41.370167  992635 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 12:34:41.371275  992635 kubeadm.go:310] 
	I0120 12:34:41.371411  992635 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 12:34:41.371436  992635 kubeadm.go:310] 
	I0120 12:34:41.371547  992635 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 12:34:41.371567  992635 kubeadm.go:310] 
	I0120 12:34:41.371607  992635 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 12:34:41.371696  992635 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 12:34:41.371776  992635 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 12:34:41.371785  992635 kubeadm.go:310] 
	I0120 12:34:41.371870  992635 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 12:34:41.371879  992635 kubeadm.go:310] 
	I0120 12:34:41.371946  992635 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 12:34:41.371956  992635 kubeadm.go:310] 
	I0120 12:34:41.372030  992635 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 12:34:41.372156  992635 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 12:34:41.372262  992635 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 12:34:41.372278  992635 kubeadm.go:310] 
	I0120 12:34:41.372392  992635 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 12:34:41.372498  992635 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 12:34:41.372507  992635 kubeadm.go:310] 
	I0120 12:34:41.372606  992635 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token n4uhes.3ig136bhcqw1unce \
	I0120 12:34:41.372783  992635 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9daca4b474befed26d429fab1ed19f40e58ea9925316ea59a2e801171f5c9665 \
	I0120 12:34:41.372829  992635 kubeadm.go:310] 	--control-plane 
	I0120 12:34:41.372852  992635 kubeadm.go:310] 
	I0120 12:34:41.372972  992635 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 12:34:41.372985  992635 kubeadm.go:310] 
	I0120 12:34:41.373076  992635 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token n4uhes.3ig136bhcqw1unce \
	I0120 12:34:41.373204  992635 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9daca4b474befed26d429fab1ed19f40e58ea9925316ea59a2e801171f5c9665 
	I0120 12:34:41.373662  992635 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 12:34:41.373689  992635 cni.go:84] Creating CNI manager for ""
	I0120 12:34:41.373703  992635 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:34:41.375374  992635 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 12:34:37.542589  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:37.559095  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:37.559165  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:37.598316  993585 cri.go:89] found id: ""
	I0120 12:34:37.598348  993585 logs.go:282] 0 containers: []
	W0120 12:34:37.598360  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:37.598369  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:37.598438  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:37.628599  993585 cri.go:89] found id: ""
	I0120 12:34:37.628633  993585 logs.go:282] 0 containers: []
	W0120 12:34:37.628645  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:37.628652  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:37.628727  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:37.668373  993585 cri.go:89] found id: ""
	I0120 12:34:37.668415  993585 logs.go:282] 0 containers: []
	W0120 12:34:37.668428  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:37.668436  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:37.668505  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:37.708471  993585 cri.go:89] found id: ""
	I0120 12:34:37.708506  993585 logs.go:282] 0 containers: []
	W0120 12:34:37.708517  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:37.708525  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:37.708586  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:37.741568  993585 cri.go:89] found id: ""
	I0120 12:34:37.741620  993585 logs.go:282] 0 containers: []
	W0120 12:34:37.741639  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:37.741647  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:37.741722  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:37.774368  993585 cri.go:89] found id: ""
	I0120 12:34:37.774396  993585 logs.go:282] 0 containers: []
	W0120 12:34:37.774406  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:37.774414  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:37.774482  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:37.806996  993585 cri.go:89] found id: ""
	I0120 12:34:37.807031  993585 logs.go:282] 0 containers: []
	W0120 12:34:37.807042  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:37.807050  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:37.807111  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:37.843251  993585 cri.go:89] found id: ""
	I0120 12:34:37.843285  993585 logs.go:282] 0 containers: []
	W0120 12:34:37.843296  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:37.843317  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:37.843336  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:37.918915  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:37.918937  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:37.918949  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:38.003693  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:38.003735  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:38.044200  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:38.044228  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:38.098358  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:38.098396  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:40.611766  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:40.625430  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:40.625513  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:40.662291  993585 cri.go:89] found id: ""
	I0120 12:34:40.662328  993585 logs.go:282] 0 containers: []
	W0120 12:34:40.662340  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:40.662348  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:40.662416  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:40.700505  993585 cri.go:89] found id: ""
	I0120 12:34:40.700535  993585 logs.go:282] 0 containers: []
	W0120 12:34:40.700543  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:40.700549  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:40.700621  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:40.740098  993585 cri.go:89] found id: ""
	I0120 12:34:40.740156  993585 logs.go:282] 0 containers: []
	W0120 12:34:40.740168  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:40.740177  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:40.740246  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:40.779511  993585 cri.go:89] found id: ""
	I0120 12:34:40.779538  993585 logs.go:282] 0 containers: []
	W0120 12:34:40.779547  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:40.779552  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:40.779602  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:40.814466  993585 cri.go:89] found id: ""
	I0120 12:34:40.814508  993585 logs.go:282] 0 containers: []
	W0120 12:34:40.814539  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:40.814549  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:40.814624  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:40.848198  993585 cri.go:89] found id: ""
	I0120 12:34:40.848224  993585 logs.go:282] 0 containers: []
	W0120 12:34:40.848233  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:40.848239  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:40.848295  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:40.881226  993585 cri.go:89] found id: ""
	I0120 12:34:40.881260  993585 logs.go:282] 0 containers: []
	W0120 12:34:40.881273  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:40.881281  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:40.881345  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:40.914605  993585 cri.go:89] found id: ""
	I0120 12:34:40.914639  993585 logs.go:282] 0 containers: []
	W0120 12:34:40.914649  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:40.914659  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:40.914671  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:40.967363  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:40.967401  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:40.981622  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:40.981655  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:41.052041  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:41.052074  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:41.052089  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:41.136661  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:41.136699  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:41.376667  992635 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 12:34:41.387591  992635 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 12:34:41.405656  992635 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 12:34:41.405748  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:34:41.405779  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-987349 minikube.k8s.io/updated_at=2025_01_20T12_34_41_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=77d80cf1517f5f1439721b28711982314b21bec9 minikube.k8s.io/name=embed-certs-987349 minikube.k8s.io/primary=true
	I0120 12:34:41.445579  992635 ops.go:34] apiserver oom_adj: -16
	I0120 12:34:41.593723  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:34:42.093899  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:34:41.685860  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:43.685895  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:42.593991  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:34:43.093847  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:34:43.594692  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:34:44.094458  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:34:44.594425  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:34:45.094074  992635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:34:45.201304  992635 kubeadm.go:1113] duration metric: took 3.795623962s to wait for elevateKubeSystemPrivileges
	I0120 12:34:45.201350  992635 kubeadm.go:394] duration metric: took 5m3.346037476s to StartCluster
	I0120 12:34:45.201376  992635 settings.go:142] acquiring lock: {Name:mk1751cb86b61fcfcb0ff093c93fb653ca2002ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:34:45.201474  992635 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:34:45.204831  992635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/kubeconfig: {Name:mk7b2d17f701fc845d991764ab9cc32b8b0646e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:34:45.205103  992635 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.170 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 12:34:45.205287  992635 config.go:182] Loaded profile config "embed-certs-987349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:34:45.205236  992635 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 12:34:45.205342  992635 addons.go:69] Setting dashboard=true in profile "embed-certs-987349"
	I0120 12:34:45.205370  992635 addons.go:238] Setting addon dashboard=true in "embed-certs-987349"
	I0120 12:34:45.205355  992635 addons.go:69] Setting default-storageclass=true in profile "embed-certs-987349"
	I0120 12:34:45.205338  992635 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-987349"
	I0120 12:34:45.205375  992635 addons.go:69] Setting metrics-server=true in profile "embed-certs-987349"
	I0120 12:34:45.205395  992635 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-987349"
	W0120 12:34:45.205403  992635 addons.go:247] addon storage-provisioner should already be in state true
	I0120 12:34:45.205413  992635 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-987349"
	I0120 12:34:45.205443  992635 host.go:66] Checking if "embed-certs-987349" exists ...
	W0120 12:34:45.205383  992635 addons.go:247] addon dashboard should already be in state true
	I0120 12:34:45.205402  992635 addons.go:238] Setting addon metrics-server=true in "embed-certs-987349"
	I0120 12:34:45.205522  992635 host.go:66] Checking if "embed-certs-987349" exists ...
	W0120 12:34:45.205537  992635 addons.go:247] addon metrics-server should already be in state true
	I0120 12:34:45.205585  992635 host.go:66] Checking if "embed-certs-987349" exists ...
	I0120 12:34:45.205843  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:34:45.205869  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:34:45.205889  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:45.205900  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:45.205939  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:34:45.205984  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:45.205987  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:34:45.206010  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:45.206677  992635 out.go:177] * Verifying Kubernetes components...
	I0120 12:34:45.208137  992635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:34:45.222507  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40047
	I0120 12:34:45.222862  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42645
	I0120 12:34:45.223151  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:45.223444  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44495
	I0120 12:34:45.223795  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:34:45.223818  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:45.223841  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:45.224249  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:45.224372  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:34:45.224394  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:45.224716  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38727
	I0120 12:34:45.224739  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:45.224840  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:34:45.224881  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:45.225063  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:45.225306  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:34:45.225342  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:45.225362  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:45.225827  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:34:45.225827  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:34:45.225864  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:45.225848  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:45.226299  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:45.226361  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:45.226579  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetState
	I0120 12:34:45.226996  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:34:45.227044  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:45.230457  992635 addons.go:238] Setting addon default-storageclass=true in "embed-certs-987349"
	W0120 12:34:45.230485  992635 addons.go:247] addon default-storageclass should already be in state true
	I0120 12:34:45.230516  992635 host.go:66] Checking if "embed-certs-987349" exists ...
	I0120 12:34:45.230928  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:34:45.230994  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:45.245536  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39527
	I0120 12:34:45.246137  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:45.246774  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:34:45.246800  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:45.246874  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46007
	I0120 12:34:45.247488  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:45.247514  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:45.247491  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34245
	I0120 12:34:45.247884  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:45.247991  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetState
	I0120 12:34:45.248377  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:34:45.248398  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:45.248650  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:34:45.248676  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:45.249046  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:45.249050  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:45.249260  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetState
	I0120 12:34:45.249453  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetState
	I0120 12:34:45.250058  992635 main.go:141] libmachine: (embed-certs-987349) Calling .DriverName
	I0120 12:34:45.250219  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45249
	I0120 12:34:45.250876  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:45.251417  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:34:45.251442  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:45.251975  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:45.252485  992635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:34:45.252527  992635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:34:45.252582  992635 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0120 12:34:45.252806  992635 main.go:141] libmachine: (embed-certs-987349) Calling .DriverName
	I0120 12:34:45.253386  992635 main.go:141] libmachine: (embed-certs-987349) Calling .DriverName
	I0120 12:34:45.253969  992635 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 12:34:45.253998  992635 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 12:34:45.254019  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHHostname
	I0120 12:34:45.254034  992635 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:34:45.254933  992635 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0120 12:34:45.255880  992635 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:34:45.255900  992635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 12:34:45.255918  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHHostname
	I0120 12:34:45.258271  992635 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0120 12:34:43.674682  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:43.690652  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:43.690723  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:43.721291  993585 cri.go:89] found id: ""
	I0120 12:34:43.721323  993585 logs.go:282] 0 containers: []
	W0120 12:34:43.721334  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:43.721342  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:43.721410  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:43.752041  993585 cri.go:89] found id: ""
	I0120 12:34:43.752065  993585 logs.go:282] 0 containers: []
	W0120 12:34:43.752072  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:43.752078  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:43.752138  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:43.785868  993585 cri.go:89] found id: ""
	I0120 12:34:43.785901  993585 logs.go:282] 0 containers: []
	W0120 12:34:43.785913  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:43.785920  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:43.785989  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:43.815950  993585 cri.go:89] found id: ""
	I0120 12:34:43.815981  993585 logs.go:282] 0 containers: []
	W0120 12:34:43.815991  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:43.815998  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:43.816051  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:43.846957  993585 cri.go:89] found id: ""
	I0120 12:34:43.846989  993585 logs.go:282] 0 containers: []
	W0120 12:34:43.846998  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:43.847006  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:43.847063  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:43.879933  993585 cri.go:89] found id: ""
	I0120 12:34:43.879961  993585 logs.go:282] 0 containers: []
	W0120 12:34:43.879971  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:43.879979  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:43.880037  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:43.910895  993585 cri.go:89] found id: ""
	I0120 12:34:43.910922  993585 logs.go:282] 0 containers: []
	W0120 12:34:43.910932  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:43.910940  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:43.911004  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:43.940052  993585 cri.go:89] found id: ""
	I0120 12:34:43.940083  993585 logs.go:282] 0 containers: []
	W0120 12:34:43.940092  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:43.940103  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:43.940119  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:43.992764  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:43.992797  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:44.004467  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:44.004489  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:44.076395  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:44.076424  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:44.076440  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:44.155006  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:44.155051  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:46.706685  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:46.720910  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:46.720986  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:46.769398  993585 cri.go:89] found id: ""
	I0120 12:34:46.769438  993585 logs.go:282] 0 containers: []
	W0120 12:34:46.769452  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:46.769461  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:46.769532  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:46.812658  993585 cri.go:89] found id: ""
	I0120 12:34:46.812692  993585 logs.go:282] 0 containers: []
	W0120 12:34:46.812704  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:46.812712  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:46.812780  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:46.849224  993585 cri.go:89] found id: ""
	I0120 12:34:46.849260  993585 logs.go:282] 0 containers: []
	W0120 12:34:46.849271  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:46.849278  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:46.849340  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:46.880621  993585 cri.go:89] found id: ""
	I0120 12:34:46.880660  993585 logs.go:282] 0 containers: []
	W0120 12:34:46.880672  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:46.880680  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:46.880754  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:46.917825  993585 cri.go:89] found id: ""
	I0120 12:34:46.917860  993585 logs.go:282] 0 containers: []
	W0120 12:34:46.917872  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:46.917880  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:46.917948  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:46.953069  993585 cri.go:89] found id: ""
	I0120 12:34:46.953102  993585 logs.go:282] 0 containers: []
	W0120 12:34:46.953114  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:46.953122  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:46.953210  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:45.258378  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:34:45.258973  992635 main.go:141] libmachine: (embed-certs-987349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:72:25", ip: ""} in network mk-embed-certs-987349: {Iface:virbr4 ExpiryTime:2025-01-20 13:29:28 +0000 UTC Type:0 Mac:52:54:00:17:72:25 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:embed-certs-987349 Clientid:01:52:54:00:17:72:25}
	I0120 12:34:45.259074  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined IP address 192.168.72.170 and MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:34:45.259447  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHPort
	I0120 12:34:45.259546  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0120 12:34:45.259555  992635 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0120 12:34:45.259566  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHHostname
	I0120 12:34:45.259650  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHKeyPath
	I0120 12:34:45.260023  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHUsername
	I0120 12:34:45.260165  992635 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/embed-certs-987349/id_rsa Username:docker}
	I0120 12:34:45.260401  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:34:45.260819  992635 main.go:141] libmachine: (embed-certs-987349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:72:25", ip: ""} in network mk-embed-certs-987349: {Iface:virbr4 ExpiryTime:2025-01-20 13:29:28 +0000 UTC Type:0 Mac:52:54:00:17:72:25 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:embed-certs-987349 Clientid:01:52:54:00:17:72:25}
	I0120 12:34:45.260837  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined IP address 192.168.72.170 and MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:34:45.261018  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHPort
	I0120 12:34:45.261123  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHKeyPath
	I0120 12:34:45.261371  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHUsername
	I0120 12:34:45.261498  992635 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/embed-certs-987349/id_rsa Username:docker}
	I0120 12:34:45.263039  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:34:45.263451  992635 main.go:141] libmachine: (embed-certs-987349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:72:25", ip: ""} in network mk-embed-certs-987349: {Iface:virbr4 ExpiryTime:2025-01-20 13:29:28 +0000 UTC Type:0 Mac:52:54:00:17:72:25 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:embed-certs-987349 Clientid:01:52:54:00:17:72:25}
	I0120 12:34:45.263466  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined IP address 192.168.72.170 and MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:34:45.263718  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHPort
	I0120 12:34:45.263876  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHKeyPath
	I0120 12:34:45.264027  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHUsername
	I0120 12:34:45.264247  992635 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/embed-certs-987349/id_rsa Username:docker}
	I0120 12:34:45.271639  992635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41109
	I0120 12:34:45.272049  992635 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:34:45.272492  992635 main.go:141] libmachine: Using API Version  1
	I0120 12:34:45.272506  992635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:34:45.272861  992635 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:34:45.273045  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetState
	I0120 12:34:45.275220  992635 main.go:141] libmachine: (embed-certs-987349) Calling .DriverName
	I0120 12:34:45.275411  992635 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 12:34:45.275425  992635 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 12:34:45.275441  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHHostname
	I0120 12:34:45.278031  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:34:45.278264  992635 main.go:141] libmachine: (embed-certs-987349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:72:25", ip: ""} in network mk-embed-certs-987349: {Iface:virbr4 ExpiryTime:2025-01-20 13:29:28 +0000 UTC Type:0 Mac:52:54:00:17:72:25 Iaid: IPaddr:192.168.72.170 Prefix:24 Hostname:embed-certs-987349 Clientid:01:52:54:00:17:72:25}
	I0120 12:34:45.278284  992635 main.go:141] libmachine: (embed-certs-987349) DBG | domain embed-certs-987349 has defined IP address 192.168.72.170 and MAC address 52:54:00:17:72:25 in network mk-embed-certs-987349
	I0120 12:34:45.278459  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHPort
	I0120 12:34:45.278651  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHKeyPath
	I0120 12:34:45.278797  992635 main.go:141] libmachine: (embed-certs-987349) Calling .GetSSHUsername
	I0120 12:34:45.278940  992635 sshutil.go:53] new ssh client: &{IP:192.168.72.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/embed-certs-987349/id_rsa Username:docker}
	I0120 12:34:45.485223  992635 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:34:45.512129  992635 node_ready.go:35] waiting up to 6m0s for node "embed-certs-987349" to be "Ready" ...
	I0120 12:34:45.535766  992635 node_ready.go:49] node "embed-certs-987349" has status "Ready":"True"
	I0120 12:34:45.535800  992635 node_ready.go:38] duration metric: took 23.637811ms for node "embed-certs-987349" to be "Ready" ...
	I0120 12:34:45.535816  992635 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:34:45.546936  992635 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-cf5ts" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:45.591884  992635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 12:34:45.672669  992635 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 12:34:45.672696  992635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0120 12:34:45.706505  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0120 12:34:45.706552  992635 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0120 12:34:45.719651  992635 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 12:34:45.719685  992635 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 12:34:45.797607  992635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:34:45.912193  992635 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 12:34:45.912228  992635 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 12:34:45.919037  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0120 12:34:45.919066  992635 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0120 12:34:45.995504  992635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 12:34:45.999745  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0120 12:34:45.999769  992635 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0120 12:34:46.012312  992635 main.go:141] libmachine: Making call to close driver server
	I0120 12:34:46.012340  992635 main.go:141] libmachine: (embed-certs-987349) Calling .Close
	I0120 12:34:46.012774  992635 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:34:46.012805  992635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:34:46.012815  992635 main.go:141] libmachine: Making call to close driver server
	I0120 12:34:46.012824  992635 main.go:141] libmachine: (embed-certs-987349) Calling .Close
	I0120 12:34:46.013169  992635 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:34:46.013179  992635 main.go:141] libmachine: (embed-certs-987349) DBG | Closing plugin on server side
	I0120 12:34:46.013190  992635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:34:46.039766  992635 main.go:141] libmachine: Making call to close driver server
	I0120 12:34:46.039787  992635 main.go:141] libmachine: (embed-certs-987349) Calling .Close
	I0120 12:34:46.040079  992635 main.go:141] libmachine: (embed-certs-987349) DBG | Closing plugin on server side
	I0120 12:34:46.040141  992635 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:34:46.040161  992635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:34:46.060472  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0120 12:34:46.060499  992635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0120 12:34:46.125182  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0120 12:34:46.125209  992635 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0120 12:34:46.163864  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0120 12:34:46.163897  992635 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0120 12:34:46.271512  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0120 12:34:46.271542  992635 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0120 12:34:46.315589  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0120 12:34:46.315615  992635 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0120 12:34:46.382800  992635 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 12:34:46.382834  992635 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0120 12:34:46.471318  992635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 12:34:47.146418  992635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.348766384s)
	I0120 12:34:47.146477  992635 main.go:141] libmachine: Making call to close driver server
	I0120 12:34:47.146493  992635 main.go:141] libmachine: (embed-certs-987349) Calling .Close
	I0120 12:34:47.146889  992635 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:34:47.146910  992635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:34:47.146920  992635 main.go:141] libmachine: Making call to close driver server
	I0120 12:34:47.146928  992635 main.go:141] libmachine: (embed-certs-987349) Calling .Close
	I0120 12:34:47.148865  992635 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:34:47.148875  992635 main.go:141] libmachine: (embed-certs-987349) DBG | Closing plugin on server side
	I0120 12:34:47.148885  992635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:34:47.375249  992635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.379691916s)
	I0120 12:34:47.375330  992635 main.go:141] libmachine: Making call to close driver server
	I0120 12:34:47.375349  992635 main.go:141] libmachine: (embed-certs-987349) Calling .Close
	I0120 12:34:47.375787  992635 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:34:47.375817  992635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:34:47.375827  992635 main.go:141] libmachine: Making call to close driver server
	I0120 12:34:47.375835  992635 main.go:141] libmachine: (embed-certs-987349) Calling .Close
	I0120 12:34:47.375855  992635 main.go:141] libmachine: (embed-certs-987349) DBG | Closing plugin on server side
	I0120 12:34:47.376085  992635 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:34:47.376105  992635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:34:47.376121  992635 addons.go:479] Verifying addon metrics-server=true in "embed-certs-987349"
	I0120 12:34:47.554735  992635 pod_ready.go:103] pod "coredns-668d6bf9bc-cf5ts" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:48.098046  992635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.626653683s)
	I0120 12:34:48.098124  992635 main.go:141] libmachine: Making call to close driver server
	I0120 12:34:48.098144  992635 main.go:141] libmachine: (embed-certs-987349) Calling .Close
	I0120 12:34:48.098568  992635 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:34:48.098628  992635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:34:48.098648  992635 main.go:141] libmachine: Making call to close driver server
	I0120 12:34:48.098651  992635 main.go:141] libmachine: (embed-certs-987349) DBG | Closing plugin on server side
	I0120 12:34:48.098663  992635 main.go:141] libmachine: (embed-certs-987349) Calling .Close
	I0120 12:34:48.098945  992635 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:34:48.098958  992635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:34:48.100362  992635 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-987349 addons enable metrics-server
	
	I0120 12:34:48.101744  992635 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0120 12:34:46.185138  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:48.185173  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:46.991590  993585 cri.go:89] found id: ""
	I0120 12:34:46.991624  993585 logs.go:282] 0 containers: []
	W0120 12:34:46.991636  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:46.991643  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:46.991709  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:47.026992  993585 cri.go:89] found id: ""
	I0120 12:34:47.027028  993585 logs.go:282] 0 containers: []
	W0120 12:34:47.027039  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:47.027052  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:47.027070  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:47.041560  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:47.041600  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:47.116950  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:47.116982  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:47.116999  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:47.220147  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:47.220186  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:47.261692  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:47.261735  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:49.823725  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:49.837812  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:49.837891  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:49.870910  993585 cri.go:89] found id: ""
	I0120 12:34:49.870942  993585 logs.go:282] 0 containers: []
	W0120 12:34:49.870954  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:49.870974  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:49.871038  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:49.901938  993585 cri.go:89] found id: ""
	I0120 12:34:49.901971  993585 logs.go:282] 0 containers: []
	W0120 12:34:49.901983  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:49.901991  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:49.902050  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:49.934859  993585 cri.go:89] found id: ""
	I0120 12:34:49.934895  993585 logs.go:282] 0 containers: []
	W0120 12:34:49.934908  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:49.934916  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:49.934978  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:49.969109  993585 cri.go:89] found id: ""
	I0120 12:34:49.969144  993585 logs.go:282] 0 containers: []
	W0120 12:34:49.969152  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:49.969159  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:49.969215  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:50.000593  993585 cri.go:89] found id: ""
	I0120 12:34:50.000624  993585 logs.go:282] 0 containers: []
	W0120 12:34:50.000634  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:50.000644  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:50.000704  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:50.031935  993585 cri.go:89] found id: ""
	I0120 12:34:50.031956  993585 logs.go:282] 0 containers: []
	W0120 12:34:50.031963  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:50.031968  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:50.032013  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:50.066876  993585 cri.go:89] found id: ""
	I0120 12:34:50.066904  993585 logs.go:282] 0 containers: []
	W0120 12:34:50.066914  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:50.066922  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:50.066980  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:50.099413  993585 cri.go:89] found id: ""
	I0120 12:34:50.099440  993585 logs.go:282] 0 containers: []
	W0120 12:34:50.099448  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:50.099458  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:50.099469  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:50.147538  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:50.147565  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:50.159202  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:50.159227  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:50.233169  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:50.233201  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:50.233218  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:50.313297  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:50.313331  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:48.102973  992635 addons.go:514] duration metric: took 2.897750546s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0120 12:34:50.054643  992635 pod_ready.go:103] pod "coredns-668d6bf9bc-cf5ts" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:50.685136  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:53.185766  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:52.849232  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:52.863600  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:52.863668  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:52.897114  993585 cri.go:89] found id: ""
	I0120 12:34:52.897146  993585 logs.go:282] 0 containers: []
	W0120 12:34:52.897158  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:52.897166  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:52.897235  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:52.931572  993585 cri.go:89] found id: ""
	I0120 12:34:52.931608  993585 logs.go:282] 0 containers: []
	W0120 12:34:52.931621  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:52.931631  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:52.931699  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:52.967427  993585 cri.go:89] found id: ""
	I0120 12:34:52.967464  993585 logs.go:282] 0 containers: []
	W0120 12:34:52.967477  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:52.967485  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:52.967550  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:53.004996  993585 cri.go:89] found id: ""
	I0120 12:34:53.005036  993585 logs.go:282] 0 containers: []
	W0120 12:34:53.005045  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:53.005052  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:53.005130  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:53.042883  993585 cri.go:89] found id: ""
	I0120 12:34:53.042920  993585 logs.go:282] 0 containers: []
	W0120 12:34:53.042932  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:53.042941  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:53.043012  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:53.081504  993585 cri.go:89] found id: ""
	I0120 12:34:53.081548  993585 logs.go:282] 0 containers: []
	W0120 12:34:53.081560  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:53.081569  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:53.081638  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:53.116486  993585 cri.go:89] found id: ""
	I0120 12:34:53.116526  993585 logs.go:282] 0 containers: []
	W0120 12:34:53.116537  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:53.116546  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:53.116621  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:53.150011  993585 cri.go:89] found id: ""
	I0120 12:34:53.150044  993585 logs.go:282] 0 containers: []
	W0120 12:34:53.150055  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:53.150068  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:53.150082  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:53.236271  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:53.236314  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:53.272793  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:53.272823  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:53.328164  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:53.328210  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:53.342124  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:53.342159  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:53.436951  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:55.938662  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:55.954006  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:55.954080  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:55.995805  993585 cri.go:89] found id: ""
	I0120 12:34:55.995836  993585 logs.go:282] 0 containers: []
	W0120 12:34:55.995847  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:55.995855  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:55.995922  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:56.037391  993585 cri.go:89] found id: ""
	I0120 12:34:56.037422  993585 logs.go:282] 0 containers: []
	W0120 12:34:56.037431  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:56.037440  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:56.037500  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:56.073395  993585 cri.go:89] found id: ""
	I0120 12:34:56.073432  993585 logs.go:282] 0 containers: []
	W0120 12:34:56.073444  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:56.073452  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:56.073521  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:56.113060  993585 cri.go:89] found id: ""
	I0120 12:34:56.113095  993585 logs.go:282] 0 containers: []
	W0120 12:34:56.113106  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:56.113114  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:56.113192  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:56.149448  993585 cri.go:89] found id: ""
	I0120 12:34:56.149481  993585 logs.go:282] 0 containers: []
	W0120 12:34:56.149492  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:56.149501  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:56.149565  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:56.188193  993585 cri.go:89] found id: ""
	I0120 12:34:56.188222  993585 logs.go:282] 0 containers: []
	W0120 12:34:56.188232  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:56.188241  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:56.188305  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:56.229490  993585 cri.go:89] found id: ""
	I0120 12:34:56.229520  993585 logs.go:282] 0 containers: []
	W0120 12:34:56.229530  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:56.229538  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:56.229596  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:56.268312  993585 cri.go:89] found id: ""
	I0120 12:34:56.268342  993585 logs.go:282] 0 containers: []
	W0120 12:34:56.268355  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:56.268368  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:56.268382  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:56.362946  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:56.362970  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:56.362987  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:56.449009  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:56.449049  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:34:56.497349  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:56.497393  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:56.552829  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:56.552864  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:52.555092  992635 pod_ready.go:93] pod "coredns-668d6bf9bc-cf5ts" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:52.555118  992635 pod_ready.go:82] duration metric: took 7.008153036s for pod "coredns-668d6bf9bc-cf5ts" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.555129  992635 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-gr6pw" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.559701  992635 pod_ready.go:93] pod "coredns-668d6bf9bc-gr6pw" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:52.559730  992635 pod_ready.go:82] duration metric: took 4.593756ms for pod "coredns-668d6bf9bc-gr6pw" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.559743  992635 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.564650  992635 pod_ready.go:93] pod "etcd-embed-certs-987349" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:52.564677  992635 pod_ready.go:82] duration metric: took 4.924851ms for pod "etcd-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.564690  992635 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.568924  992635 pod_ready.go:93] pod "kube-apiserver-embed-certs-987349" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:52.568947  992635 pod_ready.go:82] duration metric: took 4.248574ms for pod "kube-apiserver-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.568959  992635 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.573555  992635 pod_ready.go:93] pod "kube-controller-manager-embed-certs-987349" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:52.573574  992635 pod_ready.go:82] duration metric: took 4.607213ms for pod "kube-controller-manager-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.573582  992635 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xrg5x" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.951750  992635 pod_ready.go:93] pod "kube-proxy-xrg5x" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:52.951777  992635 pod_ready.go:82] duration metric: took 378.189084ms for pod "kube-proxy-xrg5x" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:52.951787  992635 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:53.352358  992635 pod_ready.go:93] pod "kube-scheduler-embed-certs-987349" in "kube-system" namespace has status "Ready":"True"
	I0120 12:34:53.352397  992635 pod_ready.go:82] duration metric: took 400.600706ms for pod "kube-scheduler-embed-certs-987349" in "kube-system" namespace to be "Ready" ...
	I0120 12:34:53.352410  992635 pod_ready.go:39] duration metric: took 7.816579945s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:34:53.352431  992635 api_server.go:52] waiting for apiserver process to appear ...
	I0120 12:34:53.352497  992635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:53.385445  992635 api_server.go:72] duration metric: took 8.18029522s to wait for apiserver process to appear ...
	I0120 12:34:53.385483  992635 api_server.go:88] waiting for apiserver healthz status ...
	I0120 12:34:53.385512  992635 api_server.go:253] Checking apiserver healthz at https://192.168.72.170:8443/healthz ...
	I0120 12:34:53.390273  992635 api_server.go:279] https://192.168.72.170:8443/healthz returned 200:
	ok
	I0120 12:34:53.391546  992635 api_server.go:141] control plane version: v1.32.0
	I0120 12:34:53.391569  992635 api_server.go:131] duration metric: took 6.078483ms to wait for apiserver health ...
	I0120 12:34:53.391576  992635 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 12:34:53.555192  992635 system_pods.go:59] 9 kube-system pods found
	I0120 12:34:53.555222  992635 system_pods.go:61] "coredns-668d6bf9bc-cf5ts" [91648c6f-7cef-427f-82f3-7572a9b5d80e] Running
	I0120 12:34:53.555227  992635 system_pods.go:61] "coredns-668d6bf9bc-gr6pw" [6ff16a87-0a5e-4d82-b13d-2c72afef6dc0] Running
	I0120 12:34:53.555231  992635 system_pods.go:61] "etcd-embed-certs-987349" [5a54b1fe-f8d1-43c6-a430-a37fa3fa04b7] Running
	I0120 12:34:53.555235  992635 system_pods.go:61] "kube-apiserver-embed-certs-987349" [3e1da80d-0a1d-44bb-945d-534b91eebb95] Running
	I0120 12:34:53.555241  992635 system_pods.go:61] "kube-controller-manager-embed-certs-987349" [e1f4800a-ff08-4ea5-8134-81130f2d8f3d] Running
	I0120 12:34:53.555245  992635 system_pods.go:61] "kube-proxy-xrg5x" [a76bebb9-1eed-46fb-9f3a-d3dc1a5930c7] Running
	I0120 12:34:53.555248  992635 system_pods.go:61] "kube-scheduler-embed-certs-987349" [d35e4dae-055f-4db7-b807-5767fa324498] Running
	I0120 12:34:53.555257  992635 system_pods.go:61] "metrics-server-f79f97bbb-4vcgc" [2108ac96-d8cd-429f-ac2d-babc6d97267b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 12:34:53.555262  992635 system_pods.go:61] "storage-provisioner" [953b33a8-d2a0-447d-a01b-49350c6555f7] Running
	I0120 12:34:53.555270  992635 system_pods.go:74] duration metric: took 163.687709ms to wait for pod list to return data ...
	I0120 12:34:53.555281  992635 default_sa.go:34] waiting for default service account to be created ...
	I0120 12:34:53.753014  992635 default_sa.go:45] found service account: "default"
	I0120 12:34:53.753053  992635 default_sa.go:55] duration metric: took 197.764358ms for default service account to be created ...
	I0120 12:34:53.753066  992635 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 12:34:53.953127  992635 system_pods.go:87] 9 kube-system pods found
	I0120 12:34:55.685957  993131 pod_ready.go:103] pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace has status "Ready":"False"
	I0120 12:34:57.679747  993131 pod_ready.go:82] duration metric: took 4m0.000931966s for pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace to be "Ready" ...
	E0120 12:34:57.679804  993131 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-hb6dm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0120 12:34:57.679835  993131 pod_ready.go:39] duration metric: took 4m14.541139208s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:34:57.679882  993131 kubeadm.go:597] duration metric: took 4m22.782450691s to restartPrimaryControlPlane
	W0120 12:34:57.679976  993131 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0120 12:34:57.680017  993131 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 12:34:59.068750  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:34:59.085643  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:34:59.085720  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:34:59.128466  993585 cri.go:89] found id: ""
	I0120 12:34:59.128566  993585 logs.go:282] 0 containers: []
	W0120 12:34:59.128584  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:34:59.128594  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:34:59.128677  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:34:59.175838  993585 cri.go:89] found id: ""
	I0120 12:34:59.175873  993585 logs.go:282] 0 containers: []
	W0120 12:34:59.175885  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:34:59.175893  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:34:59.175961  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:34:59.211334  993585 cri.go:89] found id: ""
	I0120 12:34:59.211371  993585 logs.go:282] 0 containers: []
	W0120 12:34:59.211383  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:34:59.211392  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:34:59.211466  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:34:59.248992  993585 cri.go:89] found id: ""
	I0120 12:34:59.249031  993585 logs.go:282] 0 containers: []
	W0120 12:34:59.249043  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:34:59.249060  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:34:59.249127  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:34:59.285229  993585 cri.go:89] found id: ""
	I0120 12:34:59.285266  993585 logs.go:282] 0 containers: []
	W0120 12:34:59.285279  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:34:59.285288  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:34:59.285367  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:34:59.323049  993585 cri.go:89] found id: ""
	I0120 12:34:59.323081  993585 logs.go:282] 0 containers: []
	W0120 12:34:59.323092  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:34:59.323099  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:34:59.323180  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:34:59.365925  993585 cri.go:89] found id: ""
	I0120 12:34:59.365968  993585 logs.go:282] 0 containers: []
	W0120 12:34:59.365978  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:34:59.365985  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:34:59.366060  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:34:59.406489  993585 cri.go:89] found id: ""
	I0120 12:34:59.406540  993585 logs.go:282] 0 containers: []
	W0120 12:34:59.406553  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:34:59.406565  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:34:59.406579  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:34:59.477858  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:34:59.477896  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:34:59.494617  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:34:59.494658  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:34:59.572132  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:34:59.572160  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:34:59.572178  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:34:59.668424  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:34:59.668471  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:02.212721  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:02.227926  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:02.228019  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:02.266386  993585 cri.go:89] found id: ""
	I0120 12:35:02.266431  993585 logs.go:282] 0 containers: []
	W0120 12:35:02.266444  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:02.266454  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:02.266541  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:02.301567  993585 cri.go:89] found id: ""
	I0120 12:35:02.301595  993585 logs.go:282] 0 containers: []
	W0120 12:35:02.301607  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:02.301615  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:02.301678  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:02.338717  993585 cri.go:89] found id: ""
	I0120 12:35:02.338758  993585 logs.go:282] 0 containers: []
	W0120 12:35:02.338770  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:02.338778  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:02.338847  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:02.373953  993585 cri.go:89] found id: ""
	I0120 12:35:02.373990  993585 logs.go:282] 0 containers: []
	W0120 12:35:02.374004  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:02.374014  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:02.374113  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:02.406791  993585 cri.go:89] found id: ""
	I0120 12:35:02.406828  993585 logs.go:282] 0 containers: []
	W0120 12:35:02.406839  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:02.406845  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:02.406897  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:02.443578  993585 cri.go:89] found id: ""
	I0120 12:35:02.443609  993585 logs.go:282] 0 containers: []
	W0120 12:35:02.443617  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:02.443626  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:02.443676  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:02.477334  993585 cri.go:89] found id: ""
	I0120 12:35:02.477374  993585 logs.go:282] 0 containers: []
	W0120 12:35:02.477387  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:02.477395  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:02.477468  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:02.511320  993585 cri.go:89] found id: ""
	I0120 12:35:02.511347  993585 logs.go:282] 0 containers: []
	W0120 12:35:02.511357  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:02.511368  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:02.511379  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:02.563616  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:02.563655  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:02.589388  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:02.589428  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:02.668649  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:02.668676  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:02.668690  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:02.754754  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:02.754788  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:05.298701  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:05.312912  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:05.312991  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:05.345040  993585 cri.go:89] found id: ""
	I0120 12:35:05.345073  993585 logs.go:282] 0 containers: []
	W0120 12:35:05.345082  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:05.345095  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:05.345166  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:05.378693  993585 cri.go:89] found id: ""
	I0120 12:35:05.378728  993585 logs.go:282] 0 containers: []
	W0120 12:35:05.378739  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:05.378747  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:05.378802  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:05.411600  993585 cri.go:89] found id: ""
	I0120 12:35:05.411628  993585 logs.go:282] 0 containers: []
	W0120 12:35:05.411636  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:05.411642  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:05.411693  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:05.444416  993585 cri.go:89] found id: ""
	I0120 12:35:05.444445  993585 logs.go:282] 0 containers: []
	W0120 12:35:05.444453  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:05.444461  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:05.444525  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:05.475125  993585 cri.go:89] found id: ""
	I0120 12:35:05.475158  993585 logs.go:282] 0 containers: []
	W0120 12:35:05.475171  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:05.475177  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:05.475246  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:05.508163  993585 cri.go:89] found id: ""
	I0120 12:35:05.508194  993585 logs.go:282] 0 containers: []
	W0120 12:35:05.508207  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:05.508215  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:05.508278  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:05.543703  993585 cri.go:89] found id: ""
	I0120 12:35:05.543737  993585 logs.go:282] 0 containers: []
	W0120 12:35:05.543745  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:05.543751  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:05.543819  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:05.579560  993585 cri.go:89] found id: ""
	I0120 12:35:05.579594  993585 logs.go:282] 0 containers: []
	W0120 12:35:05.579606  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:05.579620  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:05.579634  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:05.632935  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:05.632986  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:05.645983  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:05.646012  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:05.719551  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:05.719582  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:05.719599  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:05.799242  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:05.799283  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:08.344816  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:08.358927  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:08.359006  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:08.393237  993585 cri.go:89] found id: ""
	I0120 12:35:08.393265  993585 logs.go:282] 0 containers: []
	W0120 12:35:08.393274  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:08.393280  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:08.393333  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:08.432032  993585 cri.go:89] found id: ""
	I0120 12:35:08.432061  993585 logs.go:282] 0 containers: []
	W0120 12:35:08.432069  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:08.432077  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:08.432155  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:08.465329  993585 cri.go:89] found id: ""
	I0120 12:35:08.465357  993585 logs.go:282] 0 containers: []
	W0120 12:35:08.465366  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:08.465375  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:08.465450  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:08.498889  993585 cri.go:89] found id: ""
	I0120 12:35:08.498932  993585 logs.go:282] 0 containers: []
	W0120 12:35:08.498944  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:08.498952  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:08.499034  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:08.533799  993585 cri.go:89] found id: ""
	I0120 12:35:08.533827  993585 logs.go:282] 0 containers: []
	W0120 12:35:08.533836  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:08.533842  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:08.533898  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:08.569072  993585 cri.go:89] found id: ""
	I0120 12:35:08.569109  993585 logs.go:282] 0 containers: []
	W0120 12:35:08.569121  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:08.569129  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:08.569190  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:08.602775  993585 cri.go:89] found id: ""
	I0120 12:35:08.602815  993585 logs.go:282] 0 containers: []
	W0120 12:35:08.602828  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:08.602836  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:08.602899  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:08.637207  993585 cri.go:89] found id: ""
	I0120 12:35:08.637242  993585 logs.go:282] 0 containers: []
	W0120 12:35:08.637253  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:08.637266  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:08.637281  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:08.650046  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:08.650077  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:08.717640  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:08.717668  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:08.717682  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:08.795565  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:08.795605  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:08.832910  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:08.832951  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:11.391198  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:11.404454  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:11.404548  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:11.438901  993585 cri.go:89] found id: ""
	I0120 12:35:11.438942  993585 logs.go:282] 0 containers: []
	W0120 12:35:11.438951  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:11.438959  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:11.439028  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:11.475199  993585 cri.go:89] found id: ""
	I0120 12:35:11.475228  993585 logs.go:282] 0 containers: []
	W0120 12:35:11.475237  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:11.475243  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:11.475304  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:11.507984  993585 cri.go:89] found id: ""
	I0120 12:35:11.508029  993585 logs.go:282] 0 containers: []
	W0120 12:35:11.508041  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:11.508052  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:11.508145  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:11.544131  993585 cri.go:89] found id: ""
	I0120 12:35:11.544162  993585 logs.go:282] 0 containers: []
	W0120 12:35:11.544170  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:11.544176  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:11.544229  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:11.585316  993585 cri.go:89] found id: ""
	I0120 12:35:11.585353  993585 logs.go:282] 0 containers: []
	W0120 12:35:11.585364  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:11.585370  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:11.585424  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:11.621531  993585 cri.go:89] found id: ""
	I0120 12:35:11.621565  993585 logs.go:282] 0 containers: []
	W0120 12:35:11.621578  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:11.621587  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:11.621644  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:11.653882  993585 cri.go:89] found id: ""
	I0120 12:35:11.653915  993585 logs.go:282] 0 containers: []
	W0120 12:35:11.653926  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:11.653935  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:11.654005  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:11.686715  993585 cri.go:89] found id: ""
	I0120 12:35:11.686751  993585 logs.go:282] 0 containers: []
	W0120 12:35:11.686763  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:11.686777  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:11.686792  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:11.766495  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:11.766550  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:11.805907  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:11.805944  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:11.854399  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:11.854435  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:11.867131  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:11.867168  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:11.930826  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:14.431154  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:14.444170  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:14.444252  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:14.478030  993585 cri.go:89] found id: ""
	I0120 12:35:14.478067  993585 logs.go:282] 0 containers: []
	W0120 12:35:14.478077  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:14.478083  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:14.478148  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:14.510821  993585 cri.go:89] found id: ""
	I0120 12:35:14.510855  993585 logs.go:282] 0 containers: []
	W0120 12:35:14.510867  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:14.510874  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:14.510942  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:14.543080  993585 cri.go:89] found id: ""
	I0120 12:35:14.543136  993585 logs.go:282] 0 containers: []
	W0120 12:35:14.543149  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:14.543157  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:14.543214  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:14.579258  993585 cri.go:89] found id: ""
	I0120 12:35:14.579293  993585 logs.go:282] 0 containers: []
	W0120 12:35:14.579302  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:14.579308  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:14.579361  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:14.617149  993585 cri.go:89] found id: ""
	I0120 12:35:14.617187  993585 logs.go:282] 0 containers: []
	W0120 12:35:14.617198  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:14.617206  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:14.617278  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:14.650716  993585 cri.go:89] found id: ""
	I0120 12:35:14.650754  993585 logs.go:282] 0 containers: []
	W0120 12:35:14.650793  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:14.650803  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:14.650874  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:14.685987  993585 cri.go:89] found id: ""
	I0120 12:35:14.686018  993585 logs.go:282] 0 containers: []
	W0120 12:35:14.686026  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:14.686032  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:14.686084  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:14.736332  993585 cri.go:89] found id: ""
	I0120 12:35:14.736370  993585 logs.go:282] 0 containers: []
	W0120 12:35:14.736378  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:14.736389  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:14.736406  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:14.789693  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:14.789734  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:14.818344  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:14.818376  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:14.891944  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:14.891974  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:14.891990  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:14.969846  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:14.969888  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:17.512148  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:17.525055  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:17.525143  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:17.559502  993585 cri.go:89] found id: ""
	I0120 12:35:17.559539  993585 logs.go:282] 0 containers: []
	W0120 12:35:17.559550  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:17.559563  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:17.559624  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:17.596133  993585 cri.go:89] found id: ""
	I0120 12:35:17.596170  993585 logs.go:282] 0 containers: []
	W0120 12:35:17.596182  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:17.596190  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:17.596258  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:17.632458  993585 cri.go:89] found id: ""
	I0120 12:35:17.632511  993585 logs.go:282] 0 containers: []
	W0120 12:35:17.632526  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:17.632535  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:17.632614  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:17.666860  993585 cri.go:89] found id: ""
	I0120 12:35:17.666891  993585 logs.go:282] 0 containers: []
	W0120 12:35:17.666899  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:17.666905  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:17.666959  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:17.701282  993585 cri.go:89] found id: ""
	I0120 12:35:17.701309  993585 logs.go:282] 0 containers: []
	W0120 12:35:17.701318  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:17.701325  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:17.701384  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:17.733358  993585 cri.go:89] found id: ""
	I0120 12:35:17.733391  993585 logs.go:282] 0 containers: []
	W0120 12:35:17.733399  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:17.733406  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:17.733460  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:17.769630  993585 cri.go:89] found id: ""
	I0120 12:35:17.769661  993585 logs.go:282] 0 containers: []
	W0120 12:35:17.769670  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:17.769677  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:17.769731  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:17.801855  993585 cri.go:89] found id: ""
	I0120 12:35:17.801894  993585 logs.go:282] 0 containers: []
	W0120 12:35:17.801906  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:17.801920  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:17.801935  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:17.852827  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:17.852869  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:17.866559  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:17.866589  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:17.937036  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:17.937058  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:17.937078  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:18.011449  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:18.011482  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:20.551859  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:20.564461  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:20.564522  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:20.599674  993585 cri.go:89] found id: ""
	I0120 12:35:20.599700  993585 logs.go:282] 0 containers: []
	W0120 12:35:20.599708  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:20.599713  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:20.599761  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:20.634303  993585 cri.go:89] found id: ""
	I0120 12:35:20.634330  993585 logs.go:282] 0 containers: []
	W0120 12:35:20.634340  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:20.634347  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:20.634395  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:20.670501  993585 cri.go:89] found id: ""
	I0120 12:35:20.670552  993585 logs.go:282] 0 containers: []
	W0120 12:35:20.670562  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:20.670568  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:20.670635  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:20.703603  993585 cri.go:89] found id: ""
	I0120 12:35:20.703627  993585 logs.go:282] 0 containers: []
	W0120 12:35:20.703636  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:20.703644  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:20.703699  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:20.733456  993585 cri.go:89] found id: ""
	I0120 12:35:20.733490  993585 logs.go:282] 0 containers: []
	W0120 12:35:20.733501  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:20.733509  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:20.733565  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:20.764504  993585 cri.go:89] found id: ""
	I0120 12:35:20.764529  993585 logs.go:282] 0 containers: []
	W0120 12:35:20.764539  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:20.764547  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:20.764608  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:20.796510  993585 cri.go:89] found id: ""
	I0120 12:35:20.796543  993585 logs.go:282] 0 containers: []
	W0120 12:35:20.796553  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:20.796560  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:20.796623  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:20.828114  993585 cri.go:89] found id: ""
	I0120 12:35:20.828147  993585 logs.go:282] 0 containers: []
	W0120 12:35:20.828158  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:20.828170  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:20.828189  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:20.889902  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:20.889933  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:20.889949  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:20.962443  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:20.962471  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:20.999767  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:20.999798  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:21.050810  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:21.050837  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:23.565446  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:23.577843  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:23.577912  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:23.612669  993585 cri.go:89] found id: ""
	I0120 12:35:23.612699  993585 logs.go:282] 0 containers: []
	W0120 12:35:23.612710  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:23.612719  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:23.612787  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:23.646750  993585 cri.go:89] found id: ""
	I0120 12:35:23.646783  993585 logs.go:282] 0 containers: []
	W0120 12:35:23.646793  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:23.646799  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:23.646853  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:23.679879  993585 cri.go:89] found id: ""
	I0120 12:35:23.679907  993585 logs.go:282] 0 containers: []
	W0120 12:35:23.679917  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:23.679925  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:23.679989  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:23.713255  993585 cri.go:89] found id: ""
	I0120 12:35:23.713292  993585 logs.go:282] 0 containers: []
	W0120 12:35:23.713301  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:23.713307  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:23.713358  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:23.742940  993585 cri.go:89] found id: ""
	I0120 12:35:23.742966  993585 logs.go:282] 0 containers: []
	W0120 12:35:23.742974  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:23.742980  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:23.743029  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:23.771816  993585 cri.go:89] found id: ""
	I0120 12:35:23.771846  993585 logs.go:282] 0 containers: []
	W0120 12:35:23.771865  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:23.771871  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:23.771937  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:23.801508  993585 cri.go:89] found id: ""
	I0120 12:35:23.801536  993585 logs.go:282] 0 containers: []
	W0120 12:35:23.801544  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:23.801549  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:23.801606  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:23.830867  993585 cri.go:89] found id: ""
	I0120 12:35:23.830897  993585 logs.go:282] 0 containers: []
	W0120 12:35:23.830906  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:23.830918  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:23.830934  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:23.882650  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:23.882678  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:23.895231  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:23.895260  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:23.959418  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:23.959446  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:23.959461  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:24.036771  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:24.036802  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:26.577129  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:26.594999  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:26.595084  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:26.627078  993585 cri.go:89] found id: ""
	I0120 12:35:26.627114  993585 logs.go:282] 0 containers: []
	W0120 12:35:26.627123  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:26.627129  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:26.627184  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:26.667285  993585 cri.go:89] found id: ""
	I0120 12:35:26.667317  993585 logs.go:282] 0 containers: []
	W0120 12:35:26.667333  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:26.667340  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:26.667416  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:26.704185  993585 cri.go:89] found id: ""
	I0120 12:35:26.704216  993585 logs.go:282] 0 containers: []
	W0120 12:35:26.704227  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:26.704235  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:26.704296  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:26.738047  993585 cri.go:89] found id: ""
	I0120 12:35:26.738082  993585 logs.go:282] 0 containers: []
	W0120 12:35:26.738108  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:26.738117  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:26.738183  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:26.768751  993585 cri.go:89] found id: ""
	I0120 12:35:26.768783  993585 logs.go:282] 0 containers: []
	W0120 12:35:26.768794  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:26.768801  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:26.768865  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:26.799890  993585 cri.go:89] found id: ""
	I0120 12:35:26.799916  993585 logs.go:282] 0 containers: []
	W0120 12:35:26.799924  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:26.799930  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:26.799980  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:26.831879  993585 cri.go:89] found id: ""
	I0120 12:35:26.831910  993585 logs.go:282] 0 containers: []
	W0120 12:35:26.831921  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:26.831929  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:26.831987  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:26.869231  993585 cri.go:89] found id: ""
	I0120 12:35:26.869264  993585 logs.go:282] 0 containers: []
	W0120 12:35:26.869272  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:26.869282  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:26.869294  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:26.929958  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:26.929982  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:26.929996  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:25.897831  993131 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (28.217725548s)
	I0120 12:35:25.897928  993131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 12:35:25.911960  993131 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 12:35:25.920888  993131 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:35:25.929485  993131 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:35:25.929507  993131 kubeadm.go:157] found existing configuration files:
	
	I0120 12:35:25.929555  993131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0120 12:35:25.937714  993131 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:35:25.937770  993131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:35:25.946009  993131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0120 12:35:25.954472  993131 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:35:25.954515  993131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:35:25.962622  993131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0120 12:35:25.970420  993131 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:35:25.970466  993131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:35:25.978489  993131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0120 12:35:25.986579  993131 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:35:25.986631  993131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 12:35:25.994935  993131 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 12:35:26.145798  993131 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 12:35:27.025154  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:27.025189  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:27.073288  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:27.073333  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:27.124126  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:27.124156  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:29.638666  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:29.652209  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:29.652286  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:29.690747  993585 cri.go:89] found id: ""
	I0120 12:35:29.690777  993585 logs.go:282] 0 containers: []
	W0120 12:35:29.690789  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:29.690796  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:29.690857  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:29.721866  993585 cri.go:89] found id: ""
	I0120 12:35:29.721896  993585 logs.go:282] 0 containers: []
	W0120 12:35:29.721907  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:29.721915  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:29.721978  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:29.757564  993585 cri.go:89] found id: ""
	I0120 12:35:29.757596  993585 logs.go:282] 0 containers: []
	W0120 12:35:29.757628  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:29.757637  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:29.757712  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:29.790677  993585 cri.go:89] found id: ""
	I0120 12:35:29.790709  993585 logs.go:282] 0 containers: []
	W0120 12:35:29.790720  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:29.790728  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:29.790791  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:29.826917  993585 cri.go:89] found id: ""
	I0120 12:35:29.826953  993585 logs.go:282] 0 containers: []
	W0120 12:35:29.826965  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:29.826974  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:29.827039  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:29.861866  993585 cri.go:89] found id: ""
	I0120 12:35:29.861897  993585 logs.go:282] 0 containers: []
	W0120 12:35:29.861908  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:29.861916  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:29.861973  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:29.895508  993585 cri.go:89] found id: ""
	I0120 12:35:29.895543  993585 logs.go:282] 0 containers: []
	W0120 12:35:29.895554  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:29.895563  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:29.895623  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:29.927907  993585 cri.go:89] found id: ""
	I0120 12:35:29.927939  993585 logs.go:282] 0 containers: []
	W0120 12:35:29.927949  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:29.927961  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:29.927976  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:29.968111  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:29.968149  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:30.038475  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:30.038529  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:30.051650  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:30.051679  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:30.117850  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:30.117880  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:30.117896  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:34.909127  993131 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 12:35:34.909216  993131 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 12:35:34.909344  993131 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 12:35:34.909477  993131 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 12:35:34.909620  993131 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 12:35:34.909715  993131 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 12:35:34.911105  993131 out.go:235]   - Generating certificates and keys ...
	I0120 12:35:34.911202  993131 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 12:35:34.911293  993131 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 12:35:34.911398  993131 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 12:35:34.911468  993131 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 12:35:34.911533  993131 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 12:35:34.911590  993131 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 12:35:34.911674  993131 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 12:35:34.911735  993131 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 12:35:34.911828  993131 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 12:35:34.911943  993131 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 12:35:34.912009  993131 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 12:35:34.912100  993131 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 12:35:34.912190  993131 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 12:35:34.912286  993131 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 12:35:34.912332  993131 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 12:35:34.912438  993131 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 12:35:34.912528  993131 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 12:35:34.912635  993131 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 12:35:34.912726  993131 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 12:35:34.914123  993131 out.go:235]   - Booting up control plane ...
	I0120 12:35:34.914234  993131 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 12:35:34.914348  993131 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 12:35:34.914449  993131 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 12:35:34.914608  993131 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 12:35:34.914688  993131 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 12:35:34.914725  993131 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 12:35:34.914857  993131 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 12:35:34.914944  993131 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 12:35:34.915002  993131 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.58459ms
	I0120 12:35:34.915062  993131 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 12:35:34.915123  993131 kubeadm.go:310] [api-check] The API server is healthy after 5.503412907s
	I0120 12:35:34.915262  993131 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 12:35:34.915400  993131 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 12:35:34.915458  993131 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 12:35:34.915633  993131 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-981597 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 12:35:34.915681  993131 kubeadm.go:310] [bootstrap-token] Using token: i0tzs5.z567f1ntzr02cqfq
	I0120 12:35:34.916955  993131 out.go:235]   - Configuring RBAC rules ...
	I0120 12:35:34.917087  993131 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 12:35:34.917200  993131 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 12:35:34.917374  993131 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 12:35:34.917519  993131 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 12:35:34.917673  993131 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 12:35:34.917794  993131 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 12:35:34.917950  993131 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 12:35:34.918013  993131 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 12:35:34.918074  993131 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 12:35:34.918083  993131 kubeadm.go:310] 
	I0120 12:35:34.918237  993131 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 12:35:34.918260  993131 kubeadm.go:310] 
	I0120 12:35:34.918376  993131 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 12:35:34.918388  993131 kubeadm.go:310] 
	I0120 12:35:34.918425  993131 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 12:35:34.918506  993131 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 12:35:34.918601  993131 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 12:35:34.918613  993131 kubeadm.go:310] 
	I0120 12:35:34.918694  993131 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 12:35:34.918704  993131 kubeadm.go:310] 
	I0120 12:35:34.918758  993131 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 12:35:34.918770  993131 kubeadm.go:310] 
	I0120 12:35:34.918843  993131 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 12:35:34.918947  993131 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 12:35:34.919045  993131 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 12:35:34.919057  993131 kubeadm.go:310] 
	I0120 12:35:34.919174  993131 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 12:35:34.919281  993131 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 12:35:34.919295  993131 kubeadm.go:310] 
	I0120 12:35:34.919404  993131 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token i0tzs5.z567f1ntzr02cqfq \
	I0120 12:35:34.919548  993131 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9daca4b474befed26d429fab1ed19f40e58ea9925316ea59a2e801171f5c9665 \
	I0120 12:35:34.919582  993131 kubeadm.go:310] 	--control-plane 
	I0120 12:35:34.919594  993131 kubeadm.go:310] 
	I0120 12:35:34.919711  993131 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 12:35:34.919723  993131 kubeadm.go:310] 
	I0120 12:35:34.919827  993131 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token i0tzs5.z567f1ntzr02cqfq \
	I0120 12:35:34.919982  993131 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9daca4b474befed26d429fab1ed19f40e58ea9925316ea59a2e801171f5c9665 
	I0120 12:35:34.919999  993131 cni.go:84] Creating CNI manager for ""
	I0120 12:35:34.920015  993131 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:35:34.921475  993131 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 12:35:32.712573  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:32.725809  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:32.725886  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:32.761768  993585 cri.go:89] found id: ""
	I0120 12:35:32.761803  993585 logs.go:282] 0 containers: []
	W0120 12:35:32.761812  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:32.761818  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:32.761875  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:32.797578  993585 cri.go:89] found id: ""
	I0120 12:35:32.797610  993585 logs.go:282] 0 containers: []
	W0120 12:35:32.797621  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:32.797628  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:32.797694  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:32.834493  993585 cri.go:89] found id: ""
	I0120 12:35:32.834539  993585 logs.go:282] 0 containers: []
	W0120 12:35:32.834552  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:32.834559  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:32.834644  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:32.870730  993585 cri.go:89] found id: ""
	I0120 12:35:32.870762  993585 logs.go:282] 0 containers: []
	W0120 12:35:32.870774  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:32.870782  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:32.870851  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:32.913904  993585 cri.go:89] found id: ""
	I0120 12:35:32.913932  993585 logs.go:282] 0 containers: []
	W0120 12:35:32.913943  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:32.913951  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:32.914019  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:32.955928  993585 cri.go:89] found id: ""
	I0120 12:35:32.955961  993585 logs.go:282] 0 containers: []
	W0120 12:35:32.955972  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:32.955981  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:32.956044  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:33.001075  993585 cri.go:89] found id: ""
	I0120 12:35:33.001116  993585 logs.go:282] 0 containers: []
	W0120 12:35:33.001129  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:33.001138  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:33.001209  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:33.035918  993585 cri.go:89] found id: ""
	I0120 12:35:33.035954  993585 logs.go:282] 0 containers: []
	W0120 12:35:33.035961  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:33.035971  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:33.035981  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:33.090782  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:33.090816  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:33.107144  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:33.107171  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:33.184808  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:33.184830  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:33.184845  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:33.269131  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:33.269170  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:35.809619  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:35.822178  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:35.822254  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:35.862005  993585 cri.go:89] found id: ""
	I0120 12:35:35.862035  993585 logs.go:282] 0 containers: []
	W0120 12:35:35.862042  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:35.862050  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:35.862110  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:35.896880  993585 cri.go:89] found id: ""
	I0120 12:35:35.896909  993585 logs.go:282] 0 containers: []
	W0120 12:35:35.896920  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:35.896928  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:35.896995  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:35.931762  993585 cri.go:89] found id: ""
	I0120 12:35:35.931795  993585 logs.go:282] 0 containers: []
	W0120 12:35:35.931806  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:35.931815  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:35.931882  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:35.965205  993585 cri.go:89] found id: ""
	I0120 12:35:35.965236  993585 logs.go:282] 0 containers: []
	W0120 12:35:35.965246  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:35.965254  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:35.965310  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:35.999903  993585 cri.go:89] found id: ""
	I0120 12:35:35.999926  993585 logs.go:282] 0 containers: []
	W0120 12:35:35.999943  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:35.999956  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:36.000004  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:36.033944  993585 cri.go:89] found id: ""
	I0120 12:35:36.033981  993585 logs.go:282] 0 containers: []
	W0120 12:35:36.033992  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:36.034005  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:36.034073  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:36.066986  993585 cri.go:89] found id: ""
	I0120 12:35:36.067021  993585 logs.go:282] 0 containers: []
	W0120 12:35:36.067035  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:36.067043  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:36.067108  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:36.096989  993585 cri.go:89] found id: ""
	I0120 12:35:36.097021  993585 logs.go:282] 0 containers: []
	W0120 12:35:36.097033  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:36.097047  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:36.097062  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:36.170812  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:36.170838  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:36.208578  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:36.208619  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:36.259448  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:36.259483  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:36.273938  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:36.273968  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:36.342621  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:34.922590  993131 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 12:35:34.933756  993131 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 12:35:34.952622  993131 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 12:35:34.952700  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:34.952763  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-981597 minikube.k8s.io/updated_at=2025_01_20T12_35_34_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=77d80cf1517f5f1439721b28711982314b21bec9 minikube.k8s.io/name=default-k8s-diff-port-981597 minikube.k8s.io/primary=true
	I0120 12:35:35.145316  993131 ops.go:34] apiserver oom_adj: -16
	I0120 12:35:35.161459  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:35.662404  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:36.162367  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:36.662373  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:37.162163  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:37.661727  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:38.161998  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:38.662452  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:39.161911  993131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:35:39.336211  993131 kubeadm.go:1113] duration metric: took 4.383561407s to wait for elevateKubeSystemPrivileges
	I0120 12:35:39.336266  993131 kubeadm.go:394] duration metric: took 5m4.484253589s to StartCluster
	I0120 12:35:39.336293  993131 settings.go:142] acquiring lock: {Name:mk1751cb86b61fcfcb0ff093c93fb653ca2002ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:35:39.336426  993131 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:35:39.338834  993131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-942401/kubeconfig: {Name:mk7b2d17f701fc845d991764ab9cc32b8b0646e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:35:39.339088  993131 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.222 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 12:35:39.339220  993131 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 12:35:39.339332  993131 config.go:182] Loaded profile config "default-k8s-diff-port-981597": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:35:39.339365  993131 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-981597"
	I0120 12:35:39.339391  993131 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-981597"
	I0120 12:35:39.339390  993131 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-981597"
	W0120 12:35:39.339401  993131 addons.go:247] addon storage-provisioner should already be in state true
	I0120 12:35:39.339408  993131 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-981597"
	W0120 12:35:39.339418  993131 addons.go:247] addon dashboard should already be in state true
	I0120 12:35:39.339411  993131 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-981597"
	I0120 12:35:39.339435  993131 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-981597"
	W0120 12:35:39.339444  993131 addons.go:247] addon metrics-server should already be in state true
	I0120 12:35:39.339444  993131 host.go:66] Checking if "default-k8s-diff-port-981597" exists ...
	I0120 12:35:39.339451  993131 host.go:66] Checking if "default-k8s-diff-port-981597" exists ...
	I0120 12:35:39.339474  993131 host.go:66] Checking if "default-k8s-diff-port-981597" exists ...
	I0120 12:35:39.339390  993131 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-981597"
	I0120 12:35:39.339493  993131 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-981597"
	I0120 12:35:39.339824  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:35:39.339865  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:39.339892  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:35:39.339923  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:39.339892  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:35:39.340012  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:39.340084  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:35:39.340125  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:39.343052  993131 out.go:177] * Verifying Kubernetes components...
	I0120 12:35:39.344268  993131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:35:39.360766  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39599
	I0120 12:35:39.360936  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34553
	I0120 12:35:39.361027  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33557
	I0120 12:35:39.361484  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:39.361615  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:39.361686  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:39.361937  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:35:39.361959  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:39.362058  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:35:39.362066  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:39.362167  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:35:39.362178  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:39.362512  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:39.362592  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:39.362613  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:39.362835  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetState
	I0120 12:35:39.363083  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:35:39.363147  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:39.363178  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33345
	I0120 12:35:39.363870  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:39.364373  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:35:39.364508  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:39.364871  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:35:39.364893  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:39.365250  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:39.365757  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:35:39.365799  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:39.366758  993131 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-981597"
	W0120 12:35:39.366781  993131 addons.go:247] addon default-storageclass should already be in state true
	I0120 12:35:39.366816  993131 host.go:66] Checking if "default-k8s-diff-port-981597" exists ...
	I0120 12:35:39.367172  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:35:39.367210  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:39.385700  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43395
	I0120 12:35:39.386220  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:39.386752  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:35:39.386776  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:39.387167  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:39.387430  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetState
	I0120 12:35:39.388835  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42259
	I0120 12:35:39.389074  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42931
	I0120 12:35:39.389290  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:39.389718  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:39.389796  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:35:39.389819  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:39.390265  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:35:39.390287  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:39.390316  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .DriverName
	I0120 12:35:39.390346  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:39.390828  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:39.391044  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetState
	I0120 12:35:39.391081  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetState
	I0120 12:35:39.392517  993131 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0120 12:35:39.392556  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42719
	I0120 12:35:39.393043  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:39.393711  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:35:39.393715  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .DriverName
	I0120 12:35:39.393730  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:39.394195  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:39.394747  993131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:35:39.394793  993131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:35:39.395249  993131 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:35:39.395355  993131 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0120 12:35:39.395403  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .DriverName
	I0120 12:35:39.396870  993131 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:35:39.396892  993131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 12:35:39.396914  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHHostname
	I0120 12:35:39.396998  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0120 12:35:39.397017  993131 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0120 12:35:39.397039  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHHostname
	I0120 12:35:39.399496  993131 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0120 12:35:38.843738  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:38.856444  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:38.856506  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:38.892000  993585 cri.go:89] found id: ""
	I0120 12:35:38.892027  993585 logs.go:282] 0 containers: []
	W0120 12:35:38.892037  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:38.892043  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:38.892093  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:38.930509  993585 cri.go:89] found id: ""
	I0120 12:35:38.930558  993585 logs.go:282] 0 containers: []
	W0120 12:35:38.930569  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:38.930577  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:38.930643  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:38.976632  993585 cri.go:89] found id: ""
	I0120 12:35:38.976675  993585 logs.go:282] 0 containers: []
	W0120 12:35:38.976687  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:38.976695  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:38.976763  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:39.021957  993585 cri.go:89] found id: ""
	I0120 12:35:39.021993  993585 logs.go:282] 0 containers: []
	W0120 12:35:39.022004  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:39.022011  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:39.022080  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:39.060311  993585 cri.go:89] found id: ""
	I0120 12:35:39.060352  993585 logs.go:282] 0 containers: []
	W0120 12:35:39.060366  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:39.060375  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:39.060441  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:39.097901  993585 cri.go:89] found id: ""
	I0120 12:35:39.097939  993585 logs.go:282] 0 containers: []
	W0120 12:35:39.097952  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:39.097961  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:39.098029  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:39.135291  993585 cri.go:89] found id: ""
	I0120 12:35:39.135328  993585 logs.go:282] 0 containers: []
	W0120 12:35:39.135341  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:39.135349  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:39.135415  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:39.178737  993585 cri.go:89] found id: ""
	I0120 12:35:39.178775  993585 logs.go:282] 0 containers: []
	W0120 12:35:39.178810  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:39.178822  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:39.178838  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:39.228677  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:39.228723  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:39.281237  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:39.281274  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:39.298505  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:39.298554  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:39.387325  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:39.387350  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:39.387364  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:39.400927  993131 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 12:35:39.400947  993131 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 12:35:39.400969  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHHostname
	I0120 12:35:39.401577  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHPort
	I0120 12:35:39.401584  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:35:39.401591  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:35:39.401608  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4a:e1", ip: ""} in network mk-default-k8s-diff-port-981597: {Iface:virbr1 ExpiryTime:2025-01-20 13:30:20 +0000 UTC Type:0 Mac:52:54:00:a7:4a:e1 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-981597 Clientid:01:52:54:00:a7:4a:e1}
	I0120 12:35:39.401620  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4a:e1", ip: ""} in network mk-default-k8s-diff-port-981597: {Iface:virbr1 ExpiryTime:2025-01-20 13:30:20 +0000 UTC Type:0 Mac:52:54:00:a7:4a:e1 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-981597 Clientid:01:52:54:00:a7:4a:e1}
	I0120 12:35:39.401625  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined IP address 192.168.39.222 and MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:35:39.401641  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined IP address 192.168.39.222 and MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:35:39.401644  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHPort
	I0120 12:35:39.401851  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHKeyPath
	I0120 12:35:39.401948  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHKeyPath
	I0120 12:35:39.402022  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHUsername
	I0120 12:35:39.402053  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHUsername
	I0120 12:35:39.402154  993131 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/default-k8s-diff-port-981597/id_rsa Username:docker}
	I0120 12:35:39.402468  993131 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/default-k8s-diff-port-981597/id_rsa Username:docker}
	I0120 12:35:39.404077  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:35:39.406625  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHPort
	I0120 12:35:39.406703  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4a:e1", ip: ""} in network mk-default-k8s-diff-port-981597: {Iface:virbr1 ExpiryTime:2025-01-20 13:30:20 +0000 UTC Type:0 Mac:52:54:00:a7:4a:e1 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-981597 Clientid:01:52:54:00:a7:4a:e1}
	I0120 12:35:39.406720  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined IP address 192.168.39.222 and MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:35:39.410708  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHKeyPath
	I0120 12:35:39.410899  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHUsername
	I0120 12:35:39.411057  993131 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/default-k8s-diff-port-981597/id_rsa Username:docker}
	I0120 12:35:39.414646  993131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46217
	I0120 12:35:39.415080  993131 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:35:39.415539  993131 main.go:141] libmachine: Using API Version  1
	I0120 12:35:39.415560  993131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:35:39.415922  993131 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:35:39.416132  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetState
	I0120 12:35:39.417677  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .DriverName
	I0120 12:35:39.417895  993131 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 12:35:39.417909  993131 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 12:35:39.417927  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHHostname
	I0120 12:35:39.422636  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:35:39.422665  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4a:e1", ip: ""} in network mk-default-k8s-diff-port-981597: {Iface:virbr1 ExpiryTime:2025-01-20 13:30:20 +0000 UTC Type:0 Mac:52:54:00:a7:4a:e1 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:default-k8s-diff-port-981597 Clientid:01:52:54:00:a7:4a:e1}
	I0120 12:35:39.422682  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | domain default-k8s-diff-port-981597 has defined IP address 192.168.39.222 and MAC address 52:54:00:a7:4a:e1 in network mk-default-k8s-diff-port-981597
	I0120 12:35:39.422694  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHPort
	I0120 12:35:39.424675  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHKeyPath
	I0120 12:35:39.424843  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .GetSSHUsername
	I0120 12:35:39.424988  993131 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/default-k8s-diff-port-981597/id_rsa Username:docker}
	I0120 12:35:39.601008  993131 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:35:39.644654  993131 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-981597" to be "Ready" ...
	I0120 12:35:39.675702  993131 node_ready.go:49] node "default-k8s-diff-port-981597" has status "Ready":"True"
	I0120 12:35:39.675723  993131 node_ready.go:38] duration metric: took 31.032591ms for node "default-k8s-diff-port-981597" to be "Ready" ...
	I0120 12:35:39.675734  993131 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:35:39.685490  993131 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-cn8tc" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:39.768195  993131 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 12:35:39.768218  993131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0120 12:35:39.812873  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0120 12:35:39.812897  993131 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0120 12:35:39.822881  993131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 12:35:39.825928  993131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:35:39.846613  993131 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 12:35:39.846645  993131 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 12:35:39.883996  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0120 12:35:39.884037  993131 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0120 12:35:39.935435  993131 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 12:35:39.935470  993131 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 12:35:39.992813  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0120 12:35:39.992840  993131 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0120 12:35:40.026214  993131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 12:35:40.069154  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0120 12:35:40.069190  993131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0120 12:35:40.121948  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0120 12:35:40.121983  993131 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0120 12:35:40.243520  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0120 12:35:40.243553  993131 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0120 12:35:40.252481  993131 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:40.252512  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .Close
	I0120 12:35:40.252849  993131 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:40.252872  993131 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:40.252885  993131 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:40.252900  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .Close
	I0120 12:35:40.253335  993131 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:40.253397  993131 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:40.253372  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | Closing plugin on server side
	I0120 12:35:40.257887  993131 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:40.257903  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .Close
	I0120 12:35:40.258196  993131 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:40.258214  993131 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:40.295226  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0120 12:35:40.295255  993131 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0120 12:35:40.386270  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0120 12:35:40.386304  993131 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0120 12:35:40.478877  993131 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 12:35:40.478909  993131 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0120 12:35:40.533601  993131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 12:35:40.863384  993131 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.037420526s)
	I0120 12:35:40.863438  993131 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:40.863447  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .Close
	I0120 12:35:40.863790  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | Closing plugin on server side
	I0120 12:35:40.863831  993131 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:40.863841  993131 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:40.863851  993131 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:40.863864  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .Close
	I0120 12:35:40.864124  993131 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:40.864145  993131 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:40.864150  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | Closing plugin on server side
	I0120 12:35:41.207665  993131 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.181404643s)
	I0120 12:35:41.207727  993131 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:41.207743  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .Close
	I0120 12:35:41.208079  993131 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:41.208098  993131 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:41.208117  993131 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:41.208126  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .Close
	I0120 12:35:41.208422  993131 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:41.208445  993131 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:41.208445  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | Closing plugin on server side
	I0120 12:35:41.208456  993131 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-981597"
	I0120 12:35:41.719786  993131 pod_ready.go:93] pod "coredns-668d6bf9bc-cn8tc" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:41.719813  993131 pod_ready.go:82] duration metric: took 2.034287913s for pod "coredns-668d6bf9bc-cn8tc" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:41.719823  993131 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-g9m4p" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:41.984277  993131 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.450618233s)
	I0120 12:35:41.984341  993131 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:41.984368  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .Close
	I0120 12:35:41.984689  993131 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:41.984706  993131 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:41.984718  993131 main.go:141] libmachine: Making call to close driver server
	I0120 12:35:41.984728  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) Calling .Close
	I0120 12:35:41.984738  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | Closing plugin on server side
	I0120 12:35:41.985071  993131 main.go:141] libmachine: (default-k8s-diff-port-981597) DBG | Closing plugin on server side
	I0120 12:35:41.985119  993131 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:35:41.985138  993131 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:35:41.986711  993131 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-981597 addons enable metrics-server
	
	I0120 12:35:41.988326  993131 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0120 12:35:41.989523  993131 addons.go:514] duration metric: took 2.650315965s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0120 12:35:43.726169  993131 pod_ready.go:103] pod "coredns-668d6bf9bc-g9m4p" in "kube-system" namespace has status "Ready":"False"
	I0120 12:35:41.981886  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:41.996139  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:35:41.996203  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:35:42.028240  993585 cri.go:89] found id: ""
	I0120 12:35:42.028267  993585 logs.go:282] 0 containers: []
	W0120 12:35:42.028279  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:35:42.028287  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:35:42.028351  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:35:42.063513  993585 cri.go:89] found id: ""
	I0120 12:35:42.063544  993585 logs.go:282] 0 containers: []
	W0120 12:35:42.063553  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:35:42.063561  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:35:42.063622  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:35:42.095602  993585 cri.go:89] found id: ""
	I0120 12:35:42.095637  993585 logs.go:282] 0 containers: []
	W0120 12:35:42.095648  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:35:42.095656  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:35:42.095712  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:35:42.128427  993585 cri.go:89] found id: ""
	I0120 12:35:42.128460  993585 logs.go:282] 0 containers: []
	W0120 12:35:42.128471  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:35:42.128478  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:35:42.128539  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:35:42.163430  993585 cri.go:89] found id: ""
	I0120 12:35:42.163462  993585 logs.go:282] 0 containers: []
	W0120 12:35:42.163473  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:35:42.163487  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:35:42.163601  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:35:42.212225  993585 cri.go:89] found id: ""
	I0120 12:35:42.212251  993585 logs.go:282] 0 containers: []
	W0120 12:35:42.212259  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:35:42.212265  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:35:42.212326  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:35:42.251596  993585 cri.go:89] found id: ""
	I0120 12:35:42.251623  993585 logs.go:282] 0 containers: []
	W0120 12:35:42.251631  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:35:42.251637  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:35:42.251697  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:35:42.288436  993585 cri.go:89] found id: ""
	I0120 12:35:42.288472  993585 logs.go:282] 0 containers: []
	W0120 12:35:42.288485  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:35:42.288498  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:35:42.288514  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 12:35:42.351809  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:35:42.351858  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:35:42.367697  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:35:42.367740  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:35:42.445420  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:35:42.445452  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:35:42.445470  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:35:42.529150  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:35:42.529190  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:35:45.068423  993585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:45.083648  993585 kubeadm.go:597] duration metric: took 4m4.248047549s to restartPrimaryControlPlane
	W0120 12:35:45.083733  993585 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0120 12:35:45.083773  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 12:35:48.615167  993585 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.531361181s)
	I0120 12:35:48.615262  993585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 12:35:48.629340  993585 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 12:35:48.640853  993585 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:35:48.653161  993585 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:35:48.653181  993585 kubeadm.go:157] found existing configuration files:
	
	I0120 12:35:48.653220  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 12:35:48.662422  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:35:48.662489  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:35:48.672006  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 12:35:48.681430  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:35:48.681493  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:35:48.690703  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 12:35:48.699479  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:35:48.699551  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:35:48.708576  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 12:35:48.717379  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:35:48.717440  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 12:35:48.727690  993585 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 12:35:48.809089  993585 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0120 12:35:48.809181  993585 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 12:35:48.968180  993585 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 12:35:48.968344  993585 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 12:35:48.968503  993585 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0120 12:35:49.164019  993585 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 12:35:45.813799  993131 pod_ready.go:103] pod "coredns-668d6bf9bc-g9m4p" in "kube-system" namespace has status "Ready":"False"
	I0120 12:35:48.227053  993131 pod_ready.go:103] pod "coredns-668d6bf9bc-g9m4p" in "kube-system" namespace has status "Ready":"False"
	I0120 12:35:48.729367  993131 pod_ready.go:93] pod "coredns-668d6bf9bc-g9m4p" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:48.729409  993131 pod_ready.go:82] duration metric: took 7.009577783s for pod "coredns-668d6bf9bc-g9m4p" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:48.729423  993131 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:48.735596  993131 pod_ready.go:93] pod "etcd-default-k8s-diff-port-981597" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:48.735621  993131 pod_ready.go:82] duration metric: took 6.188248ms for pod "etcd-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:48.735635  993131 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:48.748236  993131 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-981597" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:48.748262  993131 pod_ready.go:82] duration metric: took 12.618834ms for pod "kube-apiserver-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:48.748275  993131 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:48.758672  993131 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-981597" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:48.758703  993131 pod_ready.go:82] duration metric: took 10.418952ms for pod "kube-controller-manager-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:48.758717  993131 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sn66t" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:48.766403  993131 pod_ready.go:93] pod "kube-proxy-sn66t" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:48.766423  993131 pod_ready.go:82] duration metric: took 7.698237ms for pod "kube-proxy-sn66t" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:48.766433  993131 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:49.124688  993131 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-981597" in "kube-system" namespace has status "Ready":"True"
	I0120 12:35:49.124714  993131 pod_ready.go:82] duration metric: took 358.274237ms for pod "kube-scheduler-default-k8s-diff-port-981597" in "kube-system" namespace to be "Ready" ...
	I0120 12:35:49.124723  993131 pod_ready.go:39] duration metric: took 9.44898025s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:35:49.124740  993131 api_server.go:52] waiting for apiserver process to appear ...
	I0120 12:35:49.124803  993131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:35:49.172406  993131 api_server.go:72] duration metric: took 9.833266884s to wait for apiserver process to appear ...
	I0120 12:35:49.172434  993131 api_server.go:88] waiting for apiserver healthz status ...
	I0120 12:35:49.172459  993131 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8444/healthz ...
	I0120 12:35:49.177280  993131 api_server.go:279] https://192.168.39.222:8444/healthz returned 200:
	ok
	I0120 12:35:49.178469  993131 api_server.go:141] control plane version: v1.32.0
	I0120 12:35:49.178498  993131 api_server.go:131] duration metric: took 6.05652ms to wait for apiserver health ...
	I0120 12:35:49.178508  993131 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 12:35:49.166637  993585 out.go:235]   - Generating certificates and keys ...
	I0120 12:35:49.166743  993585 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 12:35:49.166851  993585 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 12:35:49.166969  993585 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 12:35:49.167055  993585 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 12:35:49.167163  993585 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 12:35:49.167247  993585 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 12:35:49.167333  993585 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 12:35:49.167596  993585 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 12:35:49.167953  993585 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 12:35:49.168592  993585 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 12:35:49.168717  993585 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 12:35:49.168824  993585 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 12:35:49.305660  993585 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 12:35:49.652487  993585 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 12:35:49.782615  993585 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 12:35:49.921695  993585 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 12:35:49.937706  993585 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 12:35:49.939001  993585 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 12:35:49.939074  993585 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 12:35:50.070984  993585 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 12:35:50.072848  993585 out.go:235]   - Booting up control plane ...
	I0120 12:35:50.072980  993585 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 12:35:50.082351  993585 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 12:35:50.082939  993585 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 12:35:50.083932  993585 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 12:35:50.088842  993585 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0120 12:35:49.328775  993131 system_pods.go:59] 9 kube-system pods found
	I0120 12:35:49.328811  993131 system_pods.go:61] "coredns-668d6bf9bc-cn8tc" [19a18120-8f3f-45bd-92f3-c291423f4895] Running
	I0120 12:35:49.328819  993131 system_pods.go:61] "coredns-668d6bf9bc-g9m4p" [9e3e4568-92ab-4ee5-b10a-5489b72248d6] Running
	I0120 12:35:49.328825  993131 system_pods.go:61] "etcd-default-k8s-diff-port-981597" [82f73dcc-1328-428e-8eb7-550c9b2d2b22] Running
	I0120 12:35:49.328831  993131 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-981597" [ff2d67bb-7ff8-44ac-a043-b6f423339fc7] Running
	I0120 12:35:49.328837  993131 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-981597" [fa91d7b8-200d-464f-b2b0-3a08a4f435d8] Running
	I0120 12:35:49.328842  993131 system_pods.go:61] "kube-proxy-sn66t" [a90855a0-c87a-4b55-bd0e-4b95b062479d] Running
	I0120 12:35:49.328847  993131 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-981597" [26bb9f8b-4e05-4cb9-a863-75d6a6a6b652] Running
	I0120 12:35:49.328856  993131 system_pods.go:61] "metrics-server-f79f97bbb-xkrxx" [cf78f231-b1e0-4566-817b-bfb9b8dac3f6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 12:35:49.328862  993131 system_pods.go:61] "storage-provisioner" [e77b12e8-25f3-43ad-8588-2716dd4ccbd1] Running
	I0120 12:35:49.328876  993131 system_pods.go:74] duration metric: took 150.359796ms to wait for pod list to return data ...
	I0120 12:35:49.328889  993131 default_sa.go:34] waiting for default service account to be created ...
	I0120 12:35:49.619916  993131 default_sa.go:45] found service account: "default"
	I0120 12:35:49.619954  993131 default_sa.go:55] duration metric: took 291.056324ms for default service account to be created ...
	I0120 12:35:49.619967  993131 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 12:35:49.728886  993131 system_pods.go:87] 9 kube-system pods found
	I0120 12:36:30.091045  993585 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0120 12:36:30.091553  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:36:30.091777  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:36:35.092197  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:36:35.092442  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:36:45.093033  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:36:45.093302  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:37:05.094270  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:37:05.094487  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:37:45.096146  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:37:45.096378  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:37:45.096414  993585 kubeadm.go:310] 
	I0120 12:37:45.096477  993585 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0120 12:37:45.096535  993585 kubeadm.go:310] 		timed out waiting for the condition
	I0120 12:37:45.096547  993585 kubeadm.go:310] 
	I0120 12:37:45.096623  993585 kubeadm.go:310] 	This error is likely caused by:
	I0120 12:37:45.096688  993585 kubeadm.go:310] 		- The kubelet is not running
	I0120 12:37:45.096836  993585 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0120 12:37:45.096847  993585 kubeadm.go:310] 
	I0120 12:37:45.096982  993585 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0120 12:37:45.097022  993585 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0120 12:37:45.097075  993585 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0120 12:37:45.097088  993585 kubeadm.go:310] 
	I0120 12:37:45.097213  993585 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0120 12:37:45.097323  993585 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0120 12:37:45.097344  993585 kubeadm.go:310] 
	I0120 12:37:45.097440  993585 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0120 12:37:45.097575  993585 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0120 12:37:45.097684  993585 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0120 12:37:45.097786  993585 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0120 12:37:45.097798  993585 kubeadm.go:310] 
	I0120 12:37:45.098707  993585 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 12:37:45.098836  993585 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0120 12:37:45.098939  993585 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0120 12:37:45.099133  993585 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0120 12:37:45.099186  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 12:37:45.553353  993585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 12:37:45.568252  993585 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:37:45.577030  993585 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:37:45.577047  993585 kubeadm.go:157] found existing configuration files:
	
	I0120 12:37:45.577084  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 12:37:45.585663  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:37:45.585715  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:37:45.594051  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 12:37:45.602109  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:37:45.602159  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:37:45.610431  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 12:37:45.619241  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:37:45.619279  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:37:45.627467  993585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 12:37:45.636457  993585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:37:45.636508  993585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 12:37:45.644627  993585 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 12:37:45.711254  993585 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0120 12:37:45.711363  993585 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 12:37:45.852391  993585 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 12:37:45.852543  993585 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 12:37:45.852693  993585 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0120 12:37:46.034483  993585 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 12:37:46.036223  993585 out.go:235]   - Generating certificates and keys ...
	I0120 12:37:46.036346  993585 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 12:37:46.036455  993585 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 12:37:46.036570  993585 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 12:37:46.036663  993585 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 12:37:46.036789  993585 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 12:37:46.036889  993585 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 12:37:46.037251  993585 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 12:37:46.037740  993585 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 12:37:46.038025  993585 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 12:37:46.038414  993585 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 12:37:46.038478  993585 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 12:37:46.038581  993585 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 12:37:46.266444  993585 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 12:37:46.393858  993585 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 12:37:46.536948  993585 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 12:37:46.765338  993585 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 12:37:46.783975  993585 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 12:37:46.785028  993585 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 12:37:46.785076  993585 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 12:37:46.920894  993585 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 12:37:46.922757  993585 out.go:235]   - Booting up control plane ...
	I0120 12:37:46.922892  993585 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 12:37:46.929056  993585 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 12:37:46.933400  993585 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 12:37:46.933527  993585 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 12:37:46.939663  993585 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0120 12:38:26.942147  993585 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0120 12:38:26.942793  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:38:26.943016  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:38:31.943340  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:38:31.943563  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:38:41.944064  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:38:41.944316  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:39:01.944375  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:39:01.944608  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:39:41.943032  993585 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 12:39:41.943264  993585 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 12:39:41.943273  993585 kubeadm.go:310] 
	I0120 12:39:41.943326  993585 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0120 12:39:41.943363  993585 kubeadm.go:310] 		timed out waiting for the condition
	I0120 12:39:41.943383  993585 kubeadm.go:310] 
	I0120 12:39:41.943444  993585 kubeadm.go:310] 	This error is likely caused by:
	I0120 12:39:41.943506  993585 kubeadm.go:310] 		- The kubelet is not running
	I0120 12:39:41.943609  993585 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0120 12:39:41.943617  993585 kubeadm.go:310] 
	I0120 12:39:41.943716  993585 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0120 12:39:41.943762  993585 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0120 12:39:41.943814  993585 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0120 12:39:41.943826  993585 kubeadm.go:310] 
	I0120 12:39:41.943914  993585 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0120 12:39:41.944033  993585 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0120 12:39:41.944052  993585 kubeadm.go:310] 
	I0120 12:39:41.944219  993585 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0120 12:39:41.944348  993585 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0120 12:39:41.944450  993585 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0120 12:39:41.944591  993585 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0120 12:39:41.944613  993585 kubeadm.go:310] 
	I0120 12:39:41.945529  993585 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 12:39:41.945621  993585 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0120 12:39:41.945690  993585 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0120 12:39:41.945758  993585 kubeadm.go:394] duration metric: took 8m1.157734369s to StartCluster
	I0120 12:39:41.945816  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:39:41.945871  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:39:41.989147  993585 cri.go:89] found id: ""
	I0120 12:39:41.989175  993585 logs.go:282] 0 containers: []
	W0120 12:39:41.989183  993585 logs.go:284] No container was found matching "kube-apiserver"
	I0120 12:39:41.989188  993585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 12:39:41.989251  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:39:42.021608  993585 cri.go:89] found id: ""
	I0120 12:39:42.021631  993585 logs.go:282] 0 containers: []
	W0120 12:39:42.021639  993585 logs.go:284] No container was found matching "etcd"
	I0120 12:39:42.021646  993585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 12:39:42.021706  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:39:42.062565  993585 cri.go:89] found id: ""
	I0120 12:39:42.062592  993585 logs.go:282] 0 containers: []
	W0120 12:39:42.062601  993585 logs.go:284] No container was found matching "coredns"
	I0120 12:39:42.062607  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:39:42.062659  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:39:42.097040  993585 cri.go:89] found id: ""
	I0120 12:39:42.097067  993585 logs.go:282] 0 containers: []
	W0120 12:39:42.097075  993585 logs.go:284] No container was found matching "kube-scheduler"
	I0120 12:39:42.097081  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:39:42.097144  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:39:42.128833  993585 cri.go:89] found id: ""
	I0120 12:39:42.128862  993585 logs.go:282] 0 containers: []
	W0120 12:39:42.128873  993585 logs.go:284] No container was found matching "kube-proxy"
	I0120 12:39:42.128880  993585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:39:42.128936  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:39:42.159564  993585 cri.go:89] found id: ""
	I0120 12:39:42.159596  993585 logs.go:282] 0 containers: []
	W0120 12:39:42.159608  993585 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 12:39:42.159616  993585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 12:39:42.159676  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:39:42.189336  993585 cri.go:89] found id: ""
	I0120 12:39:42.189367  993585 logs.go:282] 0 containers: []
	W0120 12:39:42.189378  993585 logs.go:284] No container was found matching "kindnet"
	I0120 12:39:42.189386  993585 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:39:42.189450  993585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:39:42.228745  993585 cri.go:89] found id: ""
	I0120 12:39:42.228776  993585 logs.go:282] 0 containers: []
	W0120 12:39:42.228787  993585 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 12:39:42.228801  993585 logs.go:123] Gathering logs for dmesg ...
	I0120 12:39:42.228818  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:39:42.244466  993585 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:39:42.244508  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 12:39:42.336809  993585 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 12:39:42.336832  993585 logs.go:123] Gathering logs for CRI-O ...
	I0120 12:39:42.336844  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 12:39:42.443413  993585 logs.go:123] Gathering logs for container status ...
	I0120 12:39:42.443445  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:39:42.481436  993585 logs.go:123] Gathering logs for kubelet ...
	I0120 12:39:42.481466  993585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0120 12:39:42.533396  993585 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0120 12:39:42.533472  993585 out.go:270] * 
	W0120 12:39:42.533585  993585 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0120 12:39:42.533610  993585 out.go:270] * 
	W0120 12:39:42.534617  993585 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0120 12:39:42.537661  993585 out.go:201] 
	W0120 12:39:42.538809  993585 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0120 12:39:42.538865  993585 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0120 12:39:42.538897  993585 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0120 12:39:42.540269  993585 out.go:201] 
	
	
	==> CRI-O <==
	Jan 20 12:54:58 old-k8s-version-134433 crio[630]: time="2025-01-20 12:54:58.171044677Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377698171022290,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f749eb0e-7a97-4f36-a453-f0923bf9101d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:54:58 old-k8s-version-134433 crio[630]: time="2025-01-20 12:54:58.171562069Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e8b65c2d-6f24-4aa9-afb0-06a68c2164e1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:54:58 old-k8s-version-134433 crio[630]: time="2025-01-20 12:54:58.171620523Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e8b65c2d-6f24-4aa9-afb0-06a68c2164e1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:54:58 old-k8s-version-134433 crio[630]: time="2025-01-20 12:54:58.171660877Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e8b65c2d-6f24-4aa9-afb0-06a68c2164e1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:54:58 old-k8s-version-134433 crio[630]: time="2025-01-20 12:54:58.203050632Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=78255d7d-3152-4ed8-95b6-2d0854ab1b77 name=/runtime.v1.RuntimeService/Version
	Jan 20 12:54:58 old-k8s-version-134433 crio[630]: time="2025-01-20 12:54:58.203197384Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=78255d7d-3152-4ed8-95b6-2d0854ab1b77 name=/runtime.v1.RuntimeService/Version
	Jan 20 12:54:58 old-k8s-version-134433 crio[630]: time="2025-01-20 12:54:58.204674500Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ad31ba74-6229-468c-b249-a0eb70fef88c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:54:58 old-k8s-version-134433 crio[630]: time="2025-01-20 12:54:58.205140842Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377698205114907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ad31ba74-6229-468c-b249-a0eb70fef88c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:54:58 old-k8s-version-134433 crio[630]: time="2025-01-20 12:54:58.205541699Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6eb498c1-a054-4a41-b598-877be11f711c name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:54:58 old-k8s-version-134433 crio[630]: time="2025-01-20 12:54:58.205595369Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6eb498c1-a054-4a41-b598-877be11f711c name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:54:58 old-k8s-version-134433 crio[630]: time="2025-01-20 12:54:58.205633684Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6eb498c1-a054-4a41-b598-877be11f711c name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:54:58 old-k8s-version-134433 crio[630]: time="2025-01-20 12:54:58.239570366Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1f00b938-c843-42e1-9b37-a3520bc1a440 name=/runtime.v1.RuntimeService/Version
	Jan 20 12:54:58 old-k8s-version-134433 crio[630]: time="2025-01-20 12:54:58.239639951Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1f00b938-c843-42e1-9b37-a3520bc1a440 name=/runtime.v1.RuntimeService/Version
	Jan 20 12:54:58 old-k8s-version-134433 crio[630]: time="2025-01-20 12:54:58.240699517Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f2b038dc-8a36-40b7-b093-e8c72cf9bb1f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:54:58 old-k8s-version-134433 crio[630]: time="2025-01-20 12:54:58.241150788Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377698241125957,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f2b038dc-8a36-40b7-b093-e8c72cf9bb1f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:54:58 old-k8s-version-134433 crio[630]: time="2025-01-20 12:54:58.241629532Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2683017a-5644-4534-af62-a7286d5858fc name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:54:58 old-k8s-version-134433 crio[630]: time="2025-01-20 12:54:58.241701562Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2683017a-5644-4534-af62-a7286d5858fc name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:54:58 old-k8s-version-134433 crio[630]: time="2025-01-20 12:54:58.241733751Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2683017a-5644-4534-af62-a7286d5858fc name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:54:58 old-k8s-version-134433 crio[630]: time="2025-01-20 12:54:58.275870906Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d612f7c6-0884-496f-96f2-ad01aa8231b1 name=/runtime.v1.RuntimeService/Version
	Jan 20 12:54:58 old-k8s-version-134433 crio[630]: time="2025-01-20 12:54:58.275968876Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d612f7c6-0884-496f-96f2-ad01aa8231b1 name=/runtime.v1.RuntimeService/Version
	Jan 20 12:54:58 old-k8s-version-134433 crio[630]: time="2025-01-20 12:54:58.277122182Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=db4d14ec-6d11-48aa-a024-96f79de6e25b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:54:58 old-k8s-version-134433 crio[630]: time="2025-01-20 12:54:58.277544001Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377698277500956,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=db4d14ec-6d11-48aa-a024-96f79de6e25b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:54:58 old-k8s-version-134433 crio[630]: time="2025-01-20 12:54:58.278104712Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b994da7d-3a88-453c-b71c-3c21dc3dafad name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:54:58 old-k8s-version-134433 crio[630]: time="2025-01-20 12:54:58.278175348Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b994da7d-3a88-453c-b71c-3c21dc3dafad name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:54:58 old-k8s-version-134433 crio[630]: time="2025-01-20 12:54:58.278215259Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b994da7d-3a88-453c-b71c-3c21dc3dafad name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan20 12:31] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054920] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043464] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.939919] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.154572] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.498654] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.775976] systemd-fstab-generator[557]: Ignoring "noauto" option for root device
	[  +0.069639] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050163] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.195196] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.136181] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.241855] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +6.257251] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.068017] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.557848] systemd-fstab-generator[1007]: Ignoring "noauto" option for root device
	[ +12.735598] kauditd_printk_skb: 46 callbacks suppressed
	[Jan20 12:35] systemd-fstab-generator[5113]: Ignoring "noauto" option for root device
	[Jan20 12:37] systemd-fstab-generator[5394]: Ignoring "noauto" option for root device
	[  +0.069529] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:54:58 up 23 min,  0 users,  load average: 0.07, 0.05, 0.03
	Linux old-k8s-version-134433 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jan 20 12:54:55 old-k8s-version-134433 kubelet[7236]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000cea2a0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc0008a7440, 0x24, 0x60, 0x7efeee2ec1c8, 0x118, ...)
	Jan 20 12:54:55 old-k8s-version-134433 kubelet[7236]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Jan 20 12:54:55 old-k8s-version-134433 kubelet[7236]: net/http.(*Transport).dial(0xc000695540, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc0008a7440, 0x24, 0x0, 0x0, 0x0, ...)
	Jan 20 12:54:55 old-k8s-version-134433 kubelet[7236]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Jan 20 12:54:55 old-k8s-version-134433 kubelet[7236]: net/http.(*Transport).dialConn(0xc000695540, 0x4f7fe00, 0xc000120018, 0x0, 0xc00019f080, 0x5, 0xc0008a7440, 0x24, 0x0, 0xc000035440, ...)
	Jan 20 12:54:55 old-k8s-version-134433 kubelet[7236]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Jan 20 12:54:55 old-k8s-version-134433 kubelet[7236]: net/http.(*Transport).dialConnFor(0xc000695540, 0xc000027810)
	Jan 20 12:54:55 old-k8s-version-134433 kubelet[7236]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Jan 20 12:54:55 old-k8s-version-134433 kubelet[7236]: created by net/http.(*Transport).queueForDial
	Jan 20 12:54:55 old-k8s-version-134433 kubelet[7236]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Jan 20 12:54:55 old-k8s-version-134433 kubelet[7236]: goroutine 155 [select]:
	Jan 20 12:54:55 old-k8s-version-134433 kubelet[7236]: net.(*netFD).connect.func2(0x4f7fe40, 0xc0001ceae0, 0xc000d20f00, 0xc00019f320, 0xc00019f2c0)
	Jan 20 12:54:55 old-k8s-version-134433 kubelet[7236]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Jan 20 12:54:55 old-k8s-version-134433 kubelet[7236]: created by net.(*netFD).connect
	Jan 20 12:54:55 old-k8s-version-134433 kubelet[7236]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Jan 20 12:54:55 old-k8s-version-134433 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 20 12:54:55 old-k8s-version-134433 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 20 12:54:56 old-k8s-version-134433 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 179.
	Jan 20 12:54:56 old-k8s-version-134433 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 20 12:54:56 old-k8s-version-134433 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 20 12:54:56 old-k8s-version-134433 kubelet[7245]: I0120 12:54:56.542618    7245 server.go:416] Version: v1.20.0
	Jan 20 12:54:56 old-k8s-version-134433 kubelet[7245]: I0120 12:54:56.542864    7245 server.go:837] Client rotation is on, will bootstrap in background
	Jan 20 12:54:56 old-k8s-version-134433 kubelet[7245]: I0120 12:54:56.544826    7245 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 20 12:54:56 old-k8s-version-134433 kubelet[7245]: W0120 12:54:56.545788    7245 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jan 20 12:54:56 old-k8s-version-134433 kubelet[7245]: I0120 12:54:56.546163    7245 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-134433 -n old-k8s-version-134433
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-134433 -n old-k8s-version-134433: exit status 2 (244.493705ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-134433" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (372.08s)
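The failure above is the K8S_KUBELET_NOT_RUNNING exit shown earlier in the log: the kubelet on the old-k8s-version-134433 node keeps crash-looping (systemd reports "restart counter is at 179") and logs "Cannot detect current cgroup on cgroup v2", so the v1.20.0 control plane never comes up and the apiserver on localhost:8443 stays refused. The output's own suggestion is to inspect the kubelet and retry with the kubelet cgroup driver pinned to systemd (minikube issue #4172). A minimal follow-up sketch, assuming the same CI binary path, profile name, and kvm2/cri-o flags taken from the log above; whether the extra-config actually clears this particular failure is not verified by this report:

	# inspect the crash-looping kubelet and the (empty) kube container list on the node
	out/minikube-linux-amd64 -p old-k8s-version-134433 ssh "sudo journalctl -xeu kubelet | tail -n 50"
	out/minikube-linux-amd64 -p old-k8s-version-134433 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# retry the start with the kubelet cgroup driver forced to systemd, as the failure output suggests
	out/minikube-linux-amd64 start -p old-k8s-version-134433 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd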

                                                
                                    

Test pass (255/308)

Order  Passed test  Duration
3 TestDownloadOnly/v1.20.0/json-events 22.97
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.32.0/json-events 12.85
13 TestDownloadOnly/v1.32.0/preload-exists 0
17 TestDownloadOnly/v1.32.0/LogsDuration 0.06
18 TestDownloadOnly/v1.32.0/DeleteAll 0.13
19 TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.63
22 TestOffline 58.37
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 130.55
31 TestAddons/serial/GCPAuth/Namespaces 0.13
32 TestAddons/serial/GCPAuth/FakeCredentials 10.49
35 TestAddons/parallel/Registry 17.92
37 TestAddons/parallel/InspektorGadget 10.7
38 TestAddons/parallel/MetricsServer 6.47
40 TestAddons/parallel/CSI 61.53
41 TestAddons/parallel/Headlamp 19.81
42 TestAddons/parallel/CloudSpanner 6.64
43 TestAddons/parallel/LocalPath 14.26
44 TestAddons/parallel/NvidiaDevicePlugin 6.04
45 TestAddons/parallel/Yakd 11.94
47 TestAddons/StoppedEnableDisable 91.26
48 TestCertOptions 44.62
49 TestCertExpiration 277.53
51 TestForceSystemdFlag 67.87
52 TestForceSystemdEnv 69.35
54 TestKVMDriverInstallOrUpdate 4.55
58 TestErrorSpam/setup 41.84
59 TestErrorSpam/start 0.37
60 TestErrorSpam/status 0.8
61 TestErrorSpam/pause 1.61
62 TestErrorSpam/unpause 1.66
63 TestErrorSpam/stop 4.43
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 56.26
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 39.96
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.77
75 TestFunctional/serial/CacheCmd/cache/add_local 2.48
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.1
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
83 TestFunctional/serial/ExtraConfig 30.11
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.21
86 TestFunctional/serial/LogsFileCmd 1.39
87 TestFunctional/serial/InvalidService 3.82
89 TestFunctional/parallel/ConfigCmd 0.4
90 TestFunctional/parallel/DashboardCmd 32.7
91 TestFunctional/parallel/DryRun 0.28
92 TestFunctional/parallel/InternationalLanguage 0.14
93 TestFunctional/parallel/StatusCmd 1.11
97 TestFunctional/parallel/ServiceCmdConnect 10.51
98 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/PersistentVolumeClaim 43.76
101 TestFunctional/parallel/SSHCmd 0.4
102 TestFunctional/parallel/CpCmd 1.34
103 TestFunctional/parallel/MySQL 22.56
104 TestFunctional/parallel/FileSync 0.22
105 TestFunctional/parallel/CertSync 1.65
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.45
113 TestFunctional/parallel/License 1.17
114 TestFunctional/parallel/ServiceCmd/DeployApp 11.2
115 TestFunctional/parallel/Version/short 0.05
116 TestFunctional/parallel/Version/components 0.61
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.45
120 TestFunctional/parallel/ImageCommands/ImageListYaml 1.24
121 TestFunctional/parallel/ImageCommands/ImageBuild 4.41
122 TestFunctional/parallel/ImageCommands/Setup 1.76
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.5
124 TestFunctional/parallel/ProfileCmd/profile_list 0.54
125 TestFunctional/parallel/MountCmd/any-port 8.64
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.55
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.25
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.86
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.66
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.61
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.95
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.54
143 TestFunctional/parallel/MountCmd/specific-port 1.88
144 TestFunctional/parallel/ServiceCmd/List 0.34
145 TestFunctional/parallel/ServiceCmd/JSONOutput 0.33
146 TestFunctional/parallel/ServiceCmd/HTTPS 0.41
147 TestFunctional/parallel/ServiceCmd/Format 0.72
148 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
149 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
150 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
151 TestFunctional/parallel/MountCmd/VerifyCleanup 1.96
152 TestFunctional/parallel/ServiceCmd/URL 0.48
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 190.49
160 TestMultiControlPlane/serial/DeployApp 6.99
161 TestMultiControlPlane/serial/PingHostFromPods 1.17
162 TestMultiControlPlane/serial/AddWorkerNode 58.51
163 TestMultiControlPlane/serial/NodeLabels 0.07
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.84
165 TestMultiControlPlane/serial/CopyFile 12.76
166 TestMultiControlPlane/serial/StopSecondaryNode 91.42
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.65
168 TestMultiControlPlane/serial/RestartSecondaryNode 52.66
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.86
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 463.67
171 TestMultiControlPlane/serial/DeleteSecondaryNode 18.08
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.61
173 TestMultiControlPlane/serial/StopCluster 272.91
174 TestMultiControlPlane/serial/RestartCluster 117.61
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.65
176 TestMultiControlPlane/serial/AddSecondaryNode 73.65
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.84
181 TestJSONOutput/start/Command 53.44
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.63
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.6
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 7.32
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.2
209 TestMainNoArgs 0.05
210 TestMinikubeProfile 82.56
213 TestMountStart/serial/StartWithMountFirst 29.05
214 TestMountStart/serial/VerifyMountFirst 0.38
215 TestMountStart/serial/StartWithMountSecond 25.5
216 TestMountStart/serial/VerifyMountSecond 0.39
217 TestMountStart/serial/DeleteFirst 0.89
218 TestMountStart/serial/VerifyMountPostDelete 0.38
219 TestMountStart/serial/Stop 1.28
220 TestMountStart/serial/RestartStopped 21.75
221 TestMountStart/serial/VerifyMountPostStop 0.39
224 TestMultiNode/serial/FreshStart2Nodes 109.73
225 TestMultiNode/serial/DeployApp2Nodes 5.36
226 TestMultiNode/serial/PingHostFrom2Pods 0.76
227 TestMultiNode/serial/AddNode 46.62
228 TestMultiNode/serial/MultiNodeLabels 0.06
229 TestMultiNode/serial/ProfileList 0.59
230 TestMultiNode/serial/CopyFile 7.28
231 TestMultiNode/serial/StopNode 2.33
232 TestMultiNode/serial/StartAfterStop 38.81
233 TestMultiNode/serial/RestartKeepsNodes 324.2
234 TestMultiNode/serial/DeleteNode 2.54
235 TestMultiNode/serial/StopMultiNode 182.06
236 TestMultiNode/serial/RestartMultiNode 98.62
237 TestMultiNode/serial/ValidateNameConflict 40.33
244 TestScheduledStopUnix 110.55
248 TestRunningBinaryUpgrade 224.94
253 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
254 TestNoKubernetes/serial/StartWithK8s 94.66
263 TestPause/serial/Start 101.18
264 TestNoKubernetes/serial/StartWithStopK8s 66.78
266 TestNoKubernetes/serial/Start 34.24
267 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
268 TestNoKubernetes/serial/ProfileList 17.52
276 TestNetworkPlugins/group/false 2.86
280 TestStoppedBinaryUpgrade/Setup 2.27
281 TestStoppedBinaryUpgrade/Upgrade 132.97
282 TestNoKubernetes/serial/Stop 1.3
283 TestNoKubernetes/serial/StartNoArgs 38.24
284 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
285 TestStoppedBinaryUpgrade/MinikubeLogs 0.78
289 TestStartStop/group/no-preload/serial/FirstStart 77.86
290 TestStartStop/group/no-preload/serial/DeployApp 11.31
292 TestStartStop/group/embed-certs/serial/FirstStart 53.85
293 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.99
294 TestStartStop/group/no-preload/serial/Stop 91.13
296 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 55.21
297 TestStartStop/group/embed-certs/serial/DeployApp 11.27
298 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.9
299 TestStartStop/group/embed-certs/serial/Stop 90.9
300 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
302 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.27
303 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.01
304 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.05
305 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
311 TestStartStop/group/old-k8s-version/serial/Stop 3.29
312 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
317 TestStartStop/group/newest-cni/serial/FirstStart 47.58
318 TestNetworkPlugins/group/auto/Start 65.69
319 TestStartStop/group/newest-cni/serial/DeployApp 0
320 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.14
321 TestStartStop/group/newest-cni/serial/Stop 11.73
322 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.91
323 TestStartStop/group/newest-cni/serial/SecondStart 36.84
324 TestNetworkPlugins/group/auto/KubeletFlags 0.24
325 TestNetworkPlugins/group/auto/NetCatPod 13.31
326 TestNetworkPlugins/group/auto/DNS 0.18
327 TestNetworkPlugins/group/auto/Localhost 0.15
328 TestNetworkPlugins/group/auto/HairPin 0.15
329 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
330 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
331 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
332 TestStartStop/group/newest-cni/serial/Pause 2.96
333 TestNetworkPlugins/group/kindnet/Start 62.84
334 TestNetworkPlugins/group/calico/Start 116.05
335 TestNetworkPlugins/group/custom-flannel/Start 124.73
336 TestNetworkPlugins/group/enable-default-cni/Start 89.11
337 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
338 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
339 TestNetworkPlugins/group/kindnet/NetCatPod 9.23
340 TestNetworkPlugins/group/kindnet/DNS 0.17
341 TestNetworkPlugins/group/kindnet/Localhost 0.12
342 TestNetworkPlugins/group/kindnet/HairPin 0.12
343 TestNetworkPlugins/group/flannel/Start 88.39
344 TestNetworkPlugins/group/calico/ControllerPod 6.01
345 TestNetworkPlugins/group/calico/KubeletFlags 0.2
346 TestNetworkPlugins/group/calico/NetCatPod 12.23
347 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
348 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.24
349 TestNetworkPlugins/group/calico/DNS 0.18
350 TestNetworkPlugins/group/calico/Localhost 0.15
351 TestNetworkPlugins/group/calico/HairPin 0.15
352 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
353 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.27
354 TestNetworkPlugins/group/custom-flannel/DNS 0.17
355 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
356 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
357 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
358 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
359 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
360 TestNetworkPlugins/group/bridge/Start 60.84
361 TestNetworkPlugins/group/flannel/ControllerPod 6.01
362 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
363 TestNetworkPlugins/group/flannel/NetCatPod 9.24
364 TestNetworkPlugins/group/flannel/DNS 0.15
365 TestNetworkPlugins/group/flannel/Localhost 0.13
366 TestNetworkPlugins/group/flannel/HairPin 0.13
367 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
368 TestNetworkPlugins/group/bridge/NetCatPod 11.22
369 TestNetworkPlugins/group/bridge/DNS 0.13
370 TestNetworkPlugins/group/bridge/Localhost 0.11
371 TestNetworkPlugins/group/bridge/HairPin 0.11
TestDownloadOnly/v1.20.0/json-events (22.97s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-060504 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-060504 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (22.973544689s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (22.97s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0120 11:22:11.767018  949656 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0120 11:22:11.767127  949656 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-060504
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-060504: exit status 85 (62.484358ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-060504 | jenkins | v1.35.0 | 20 Jan 25 11:21 UTC |          |
	|         | -p download-only-060504        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 11:21:48
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 11:21:48.835608  949667 out.go:345] Setting OutFile to fd 1 ...
	I0120 11:21:48.836006  949667 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:21:48.836018  949667 out.go:358] Setting ErrFile to fd 2...
	I0120 11:21:48.836023  949667 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:21:48.836251  949667 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-942401/.minikube/bin
	W0120 11:21:48.836389  949667 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20151-942401/.minikube/config/config.json: open /home/jenkins/minikube-integration/20151-942401/.minikube/config/config.json: no such file or directory
	I0120 11:21:48.837021  949667 out.go:352] Setting JSON to true
	I0120 11:21:48.838030  949667 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":14652,"bootTime":1737357457,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 11:21:48.838153  949667 start.go:139] virtualization: kvm guest
	I0120 11:21:48.840716  949667 out.go:97] [download-only-060504] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0120 11:21:48.840857  949667 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball: no such file or directory
	I0120 11:21:48.840879  949667 notify.go:220] Checking for updates...
	I0120 11:21:48.842234  949667 out.go:169] MINIKUBE_LOCATION=20151
	I0120 11:21:48.843646  949667 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 11:21:48.844912  949667 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 11:21:48.846152  949667 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-942401/.minikube
	I0120 11:21:48.847292  949667 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0120 11:21:48.849505  949667 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0120 11:21:48.849707  949667 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 11:21:48.880006  949667 out.go:97] Using the kvm2 driver based on user configuration
	I0120 11:21:48.880028  949667 start.go:297] selected driver: kvm2
	I0120 11:21:48.880034  949667 start.go:901] validating driver "kvm2" against <nil>
	I0120 11:21:48.880344  949667 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 11:21:48.880424  949667 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20151-942401/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 11:21:48.895135  949667 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 11:21:48.895179  949667 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 11:21:48.895653  949667 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0120 11:21:48.895777  949667 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0120 11:21:48.895804  949667 cni.go:84] Creating CNI manager for ""
	I0120 11:21:48.895853  949667 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 11:21:48.895862  949667 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0120 11:21:48.895909  949667 start.go:340] cluster config:
	{Name:download-only-060504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-060504 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISoc
ket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 11:21:48.896069  949667 iso.go:125] acquiring lock: {Name:mk7f0ac9e7ba04626414e742c3e5292d79996a4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 11:21:48.897654  949667 out.go:97] Downloading VM boot image ...
	I0120 11:21:48.897691  949667 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20151-942401/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0120 11:21:58.745054  949667 out.go:97] Starting "download-only-060504" primary control-plane node in "download-only-060504" cluster
	I0120 11:21:58.745086  949667 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0120 11:21:58.840903  949667 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0120 11:21:58.840935  949667 cache.go:56] Caching tarball of preloaded images
	I0120 11:21:58.841101  949667 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0120 11:21:58.842691  949667 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0120 11:21:58.842705  949667 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0120 11:21:58.938924  949667 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-060504 host does not exist
	  To start a cluster, run: "minikube start -p download-only-060504"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-060504
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.32.0/json-events (12.85s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-057266 --force --alsologtostderr --kubernetes-version=v1.32.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-057266 --force --alsologtostderr --kubernetes-version=v1.32.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (12.851081675s)
--- PASS: TestDownloadOnly/v1.32.0/json-events (12.85s)

                                                
                                    
TestDownloadOnly/v1.32.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/preload-exists
I0120 11:22:24.944634  949656 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
I0120 11:22:24.944678  949656 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-057266
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-057266: exit status 85 (63.958364ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-060504 | jenkins | v1.35.0 | 20 Jan 25 11:21 UTC |                     |
	|         | -p download-only-060504        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 20 Jan 25 11:22 UTC | 20 Jan 25 11:22 UTC |
	| delete  | -p download-only-060504        | download-only-060504 | jenkins | v1.35.0 | 20 Jan 25 11:22 UTC | 20 Jan 25 11:22 UTC |
	| start   | -o=json --download-only        | download-only-057266 | jenkins | v1.35.0 | 20 Jan 25 11:22 UTC |                     |
	|         | -p download-only-057266        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 11:22:12
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 11:22:12.137167  949904 out.go:345] Setting OutFile to fd 1 ...
	I0120 11:22:12.137277  949904 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:22:12.137289  949904 out.go:358] Setting ErrFile to fd 2...
	I0120 11:22:12.137294  949904 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:22:12.137511  949904 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-942401/.minikube/bin
	I0120 11:22:12.138152  949904 out.go:352] Setting JSON to true
	I0120 11:22:12.139239  949904 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":14675,"bootTime":1737357457,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 11:22:12.139338  949904 start.go:139] virtualization: kvm guest
	I0120 11:22:12.141326  949904 out.go:97] [download-only-057266] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 11:22:12.141467  949904 notify.go:220] Checking for updates...
	I0120 11:22:12.142795  949904 out.go:169] MINIKUBE_LOCATION=20151
	I0120 11:22:12.144236  949904 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 11:22:12.145445  949904 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 11:22:12.146747  949904 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-942401/.minikube
	I0120 11:22:12.147978  949904 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0120 11:22:12.150094  949904 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0120 11:22:12.150311  949904 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 11:22:12.181144  949904 out.go:97] Using the kvm2 driver based on user configuration
	I0120 11:22:12.181166  949904 start.go:297] selected driver: kvm2
	I0120 11:22:12.181171  949904 start.go:901] validating driver "kvm2" against <nil>
	I0120 11:22:12.181490  949904 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 11:22:12.181573  949904 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20151-942401/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 11:22:12.196254  949904 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 11:22:12.196312  949904 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 11:22:12.196815  949904 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0120 11:22:12.196942  949904 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0120 11:22:12.196970  949904 cni.go:84] Creating CNI manager for ""
	I0120 11:22:12.197023  949904 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 11:22:12.197031  949904 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0120 11:22:12.197084  949904 start.go:340] cluster config:
	{Name:download-only-057266 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:download-only-057266 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISoc
ket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 11:22:12.197184  949904 iso.go:125] acquiring lock: {Name:mk7f0ac9e7ba04626414e742c3e5292d79996a4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 11:22:12.198742  949904 out.go:97] Starting "download-only-057266" primary control-plane node in "download-only-057266" cluster
	I0120 11:22:12.198762  949904 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 11:22:13.125121  949904 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0120 11:22:13.125167  949904 cache.go:56] Caching tarball of preloaded images
	I0120 11:22:13.125339  949904 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 11:22:13.127178  949904 out.go:97] Downloading Kubernetes v1.32.0 preload ...
	I0120 11:22:13.127203  949904 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ...
	I0120 11:22:13.224215  949904 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/20151-942401/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-057266 host does not exist
	  To start a cluster, run: "minikube start -p download-only-057266"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.32.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-057266
--- PASS: TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.63s)

                                                
                                                
=== RUN   TestBinaryMirror
I0120 11:22:25.532595  949656 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-093509 --alsologtostderr --binary-mirror http://127.0.0.1:37297 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-093509" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-093509
--- PASS: TestBinaryMirror (0.63s)

                                                
                                    
TestOffline (58.37s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-348074 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-348074 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (57.364819616s)
helpers_test.go:175: Cleaning up "offline-crio-348074" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-348074
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-348074: (1.001456912s)
--- PASS: TestOffline (58.37s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-158281
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-158281: exit status 85 (51.163661ms)

                                                
                                                
-- stdout --
	* Profile "addons-158281" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-158281"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-158281
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-158281: exit status 85 (52.853509ms)

                                                
                                                
-- stdout --
	* Profile "addons-158281" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-158281"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (130.55s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-158281 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-158281 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m10.548194696s)
--- PASS: TestAddons/Setup (130.55s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-158281 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-158281 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.49s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-158281 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-158281 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ffeb7439-f2bd-4e16-b03c-51ac6665b4f0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ffeb7439-f2bd-4e16-b03c-51ac6665b4f0] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.004265135s
addons_test.go:633: (dbg) Run:  kubectl --context addons-158281 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-158281 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-158281 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.49s)

                                                
                                    
TestAddons/parallel/Registry (17.92s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 6.414924ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c86875c6f-hrzzv" [429b7809-2f4f-4e55-af7f-3ecbbf87557d] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.117978093s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-whl4v" [bdd62f98-726c-40c6-a3c6-fa45328ca334] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004190828s
addons_test.go:331: (dbg) Run:  kubectl --context addons-158281 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-158281 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-158281 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.904003382s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-158281 ip
2025/01/20 11:25:13 [DEBUG] GET http://192.168.39.113:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-158281 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.92s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.7s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-jcnwj" [a8ab0c85-d59b-4b51-9b19-a32dcdd0df30] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.011725652s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-158281 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-158281 addons disable inspektor-gadget --alsologtostderr -v=1: (5.685885694s)
--- PASS: TestAddons/parallel/InspektorGadget (10.70s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.47s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 6.358495ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-kg4cd" [47516766-d1e7-492c-b56e-4ee032ec8b3f] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.115106528s
addons_test.go:402: (dbg) Run:  kubectl --context addons-158281 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-158281 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-158281 addons disable metrics-server --alsologtostderr -v=1: (1.275397457s)
--- PASS: TestAddons/parallel/MetricsServer (6.47s)

                                                
                                    
TestAddons/parallel/CSI (61.53s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0120 11:25:09.173979  949656 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0120 11:25:09.179636  949656 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0120 11:25:09.179664  949656 kapi.go:107] duration metric: took 5.709201ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 5.722168ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-158281 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-158281 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [bd890fc2-8a8b-4d54-acab-07db1d6817da] Pending
helpers_test.go:344: "task-pv-pod" [bd890fc2-8a8b-4d54-acab-07db1d6817da] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [bd890fc2-8a8b-4d54-acab-07db1d6817da] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003757365s
addons_test.go:511: (dbg) Run:  kubectl --context addons-158281 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-158281 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-158281 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-158281 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-158281 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-158281 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-158281 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [fe03a673-9ccb-4593-9e74-733070f2d568] Pending
helpers_test.go:344: "task-pv-pod-restore" [fe03a673-9ccb-4593-9e74-733070f2d568] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [fe03a673-9ccb-4593-9e74-733070f2d568] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004196817s
addons_test.go:553: (dbg) Run:  kubectl --context addons-158281 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-158281 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-158281 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-158281 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-158281 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-158281 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.918223698s)
--- PASS: TestAddons/parallel/CSI (61.53s)

                                                
                                    
TestAddons/parallel/Headlamp (19.81s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-158281 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-69d78d796f-lnsqc" [6a2efb2d-e663-4e52-bc53-c4a8d3ccbc89] Pending
helpers_test.go:344: "headlamp-69d78d796f-lnsqc" [6a2efb2d-e663-4e52-bc53-c4a8d3ccbc89] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-69d78d796f-lnsqc" [6a2efb2d-e663-4e52-bc53-c4a8d3ccbc89] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.003344037s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-158281 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-158281 addons disable headlamp --alsologtostderr -v=1: (5.824363363s)
--- PASS: TestAddons/parallel/Headlamp (19.81s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.64s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-95d8w" [fbeccd83-db82-4571-b955-8be9096de200] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004137544s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-158281 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.64s)

                                                
                                    
TestAddons/parallel/LocalPath (14.26s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-158281 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-158281 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-158281 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [1e2815a8-1b9a-4f16-829e-d88912d1600a] Pending
helpers_test.go:344: "test-local-path" [1e2815a8-1b9a-4f16-829e-d88912d1600a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [1e2815a8-1b9a-4f16-829e-d88912d1600a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [1e2815a8-1b9a-4f16-829e-d88912d1600a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 7.004258889s
addons_test.go:906: (dbg) Run:  kubectl --context addons-158281 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-158281 ssh "cat /opt/local-path-provisioner/pvc-154e1d54-dd50-44d3-a13f-5a4e77381800_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-158281 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-158281 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-158281 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (14.26s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.04s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-qwbjn" [22f8389b-4a08-44b4-8bf5-4052d2b93153] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.124156925s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-158281 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.04s)

                                                
                                    
TestAddons/parallel/Yakd (11.94s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-c8hvx" [fef3d0ad-a07e-49e1-8269-5c86df3b1a91] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003067547s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-158281 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-158281 addons disable yakd --alsologtostderr -v=1: (5.939288038s)
--- PASS: TestAddons/parallel/Yakd (11.94s)

                                                
                                    
TestAddons/StoppedEnableDisable (91.26s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-158281
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-158281: (1m30.973275122s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-158281
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-158281
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-158281
--- PASS: TestAddons/StoppedEnableDisable (91.26s)

                                                
                                    
TestCertOptions (44.62s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-600668 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E0120 12:24:37.399858  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-600668 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (43.322618786s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-600668 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-600668 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-600668 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-600668" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-600668
--- PASS: TestCertOptions (44.62s)

                                                
                                    
TestCertExpiration (277.53s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-673364 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-673364 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m7.866687764s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-673364 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
E0120 12:27:24.378704  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/functional-473856/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-673364 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (28.840069463s)
helpers_test.go:175: Cleaning up "cert-expiration-673364" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-673364
--- PASS: TestCertExpiration (277.53s)

                                                
                                    
TestForceSystemdFlag (67.87s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-595350 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-595350 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m6.494317587s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-595350 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-595350" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-595350
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-595350: (1.144001092s)
--- PASS: TestForceSystemdFlag (67.87s)

                                                
                                    
TestForceSystemdEnv (69.35s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-414382 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-414382 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m8.353625388s)
helpers_test.go:175: Cleaning up "force-systemd-env-414382" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-414382
--- PASS: TestForceSystemdEnv (69.35s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.55s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0120 12:24:52.407260  949656 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 12:24:52.407465  949656 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0120 12:24:52.438315  949656 install.go:62] docker-machine-driver-kvm2: exit status 1
W0120 12:24:52.438853  949656 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0120 12:24:52.438939  949656 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2124650334/001/docker-machine-driver-kvm2
I0120 12:24:52.686647  949656 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2124650334/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a660 0x530a660 0x530a660 0x530a660 0x530a660 0x530a660 0x530a660] Decompressors:map[bz2:0xc000803800 gz:0xc000803808 tar:0xc0008037b0 tar.bz2:0xc0008037c0 tar.gz:0xc0008037d0 tar.xz:0xc0008037e0 tar.zst:0xc0008037f0 tbz2:0xc0008037c0 tgz:0xc0008037d0 txz:0xc0008037e0 tzst:0xc0008037f0 xz:0xc000803810 zip:0xc000803820 zst:0xc000803818] Getters:map[file:0xc0022a0670 http:0xc000c0a230 https:0xc000c0a280] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0120 12:24:52.686693  949656 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2124650334/001/docker-machine-driver-kvm2
I0120 12:24:55.153366  949656 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 12:24:55.153473  949656 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0120 12:24:55.193173  949656 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0120 12:24:55.193215  949656 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0120 12:24:55.193286  949656 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0120 12:24:55.193316  949656 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2124650334/002/docker-machine-driver-kvm2
I0120 12:24:55.223878  949656 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2124650334/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a660 0x530a660 0x530a660 0x530a660 0x530a660 0x530a660 0x530a660] Decompressors:map[bz2:0xc000803800 gz:0xc000803808 tar:0xc0008037b0 tar.bz2:0xc0008037c0 tar.gz:0xc0008037d0 tar.xz:0xc0008037e0 tar.zst:0xc0008037f0 tbz2:0xc0008037c0 tgz:0xc0008037d0 txz:0xc0008037e0 tzst:0xc0008037f0 xz:0xc000803810 zip:0xc000803820 zst:0xc000803818] Getters:map[file:0xc0022a1240 http:0xc000c0b400 https:0xc000c0b450] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0120 12:24:55.223919  949656 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2124650334/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.55s)

                                                
                                    
TestErrorSpam/setup (41.84s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-087793 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-087793 --driver=kvm2  --container-runtime=crio
E0120 11:29:37.408326  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:29:37.414779  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:29:37.426177  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:29:37.447527  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:29:37.488975  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:29:37.570514  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:29:37.732103  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:29:38.053855  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:29:38.695739  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:29:39.977379  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:29:42.540323  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:29:47.661822  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:29:57.903318  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-087793 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-087793 --driver=kvm2  --container-runtime=crio: (41.841304854s)
--- PASS: TestErrorSpam/setup (41.84s)

                                                
                                    
TestErrorSpam/start (0.37s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-087793 --log_dir /tmp/nospam-087793 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-087793 --log_dir /tmp/nospam-087793 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-087793 --log_dir /tmp/nospam-087793 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

                                                
                                    
TestErrorSpam/status (0.8s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-087793 --log_dir /tmp/nospam-087793 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-087793 --log_dir /tmp/nospam-087793 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-087793 --log_dir /tmp/nospam-087793 status
--- PASS: TestErrorSpam/status (0.80s)

                                                
                                    
TestErrorSpam/pause (1.61s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-087793 --log_dir /tmp/nospam-087793 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-087793 --log_dir /tmp/nospam-087793 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-087793 --log_dir /tmp/nospam-087793 pause
--- PASS: TestErrorSpam/pause (1.61s)

                                                
                                    
TestErrorSpam/unpause (1.66s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-087793 --log_dir /tmp/nospam-087793 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-087793 --log_dir /tmp/nospam-087793 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-087793 --log_dir /tmp/nospam-087793 unpause
--- PASS: TestErrorSpam/unpause (1.66s)

                                                
                                    
TestErrorSpam/stop (4.43s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-087793 --log_dir /tmp/nospam-087793 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-087793 --log_dir /tmp/nospam-087793 stop: (1.660781201s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-087793 --log_dir /tmp/nospam-087793 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-087793 --log_dir /tmp/nospam-087793 stop: (1.263909571s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-087793 --log_dir /tmp/nospam-087793 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-087793 --log_dir /tmp/nospam-087793 stop: (1.504390424s)
--- PASS: TestErrorSpam/stop (4.43s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20151-942401/.minikube/files/etc/test/nested/copy/949656/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (56.26s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-473856 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0120 11:30:18.385013  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:30:59.347247  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-473856 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (56.255710349s)
--- PASS: TestFunctional/serial/StartWithProxy (56.26s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (39.96s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0120 11:31:14.444993  949656 config.go:182] Loaded profile config "functional-473856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-473856 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-473856 --alsologtostderr -v=8: (39.95838693s)
functional_test.go:663: soft start took 39.959103321s for "functional-473856" cluster.
I0120 11:31:54.403830  949656 config.go:182] Loaded profile config "functional-473856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestFunctional/serial/SoftStart (39.96s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-473856 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.77s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-473856 cache add registry.k8s.io/pause:3.1: (1.519874696s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-473856 cache add registry.k8s.io/pause:3.3: (1.677299041s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-473856 cache add registry.k8s.io/pause:latest: (1.575776651s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.77s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.48s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-473856 /tmp/TestFunctionalserialCacheCmdcacheadd_local1536604195/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 cache add minikube-local-cache-test:functional-473856
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-473856 cache add minikube-local-cache-test:functional-473856: (2.168785784s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 cache delete minikube-local-cache-test:functional-473856
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-473856
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.48s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-473856 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (218.424361ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-473856 cache reload: (1.411833213s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.10s)
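
For reference, the cache reload flow above amounts to: remove the cached image inside the node, confirm `crictl inspecti` now fails, run `cache reload`, then confirm the image is back. A minimal sketch of that sequence in Go, assuming a `minikube` binary on PATH (the run above uses out/minikube-linux-amd64) and the same profile name; this is illustrative, not the test's own code:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and reports whether it exited successfully.
func run(name string, args ...string) bool {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s\n", name, args, out)
	return err == nil
}

func main() {
	const profile = "functional-473856" // profile name from the run above
	img := "registry.k8s.io/pause:latest"

	// Remove the image inside the node, then expect inspecti to fail.
	run("minikube", "-p", profile, "ssh", "sudo crictl rmi "+img)
	if run("minikube", "-p", profile, "ssh", "sudo crictl inspecti "+img) {
		fmt.Println("expected inspecti to fail after rmi")
		return
	}

	// Reload the cache and expect the image to be present again.
	run("minikube", "-p", profile, "cache", "reload")
	if !run("minikube", "-p", profile, "ssh", "sudo crictl inspecti "+img) {
		fmt.Println("image still missing after cache reload")
		return
	}
	fmt.Println("cache reload restored", img)
}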

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 kubectl -- --context functional-473856 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-473856 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (30.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-473856 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0120 11:32:21.269579  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-473856 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (30.106824767s)
functional_test.go:761: restart took 30.106956604s for "functional-473856" cluster.
I0120 11:32:34.617329  949656 config.go:182] Loaded profile config "functional-473856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestFunctional/serial/ExtraConfig (30.11s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-473856 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.21s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-473856 logs: (1.209223716s)
--- PASS: TestFunctional/serial/LogsCmd (1.21s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.39s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 logs --file /tmp/TestFunctionalserialLogsFileCmd1800426729/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-473856 logs --file /tmp/TestFunctionalserialLogsFileCmd1800426729/001/logs.txt: (1.392812426s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.39s)

                                                
                                    
TestFunctional/serial/InvalidService (3.82s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-473856 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-473856
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-473856: exit status 115 (272.713908ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.214:32513 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-473856 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.82s)
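
The invalid-service case asserts that `minikube service` refuses with exit status 115 (SVC_UNREACHABLE, as in the stderr above) when the Service matches no running pod. A rough sketch of that expectation, assuming `minikube` on PATH and testdata/invalidsvc.yaml already applied; not the test's own code:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// With testdata/invalidsvc.yaml applied there is a Service but no
	// running pod behind it, so `minikube service` should bail out.
	err := exec.Command("minikube", "service", "invalid-svc",
		"-p", "functional-473856").Run()

	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 115 {
		fmt.Println("got the expected SVC_UNREACHABLE exit code (115)")
		return
	}
	fmt.Println("expected exit status 115, got:", err)
}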

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-473856 config get cpus: exit status 14 (72.812093ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-473856 config get cpus: exit status 14 (60.117253ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)
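
The config round trip above relies on `minikube config get` returning exit code 14 when the key is unset (the "specified key could not be found" error above) and 0 once it has a value. A rough sketch of checking that behaviour, assuming `minikube` is on PATH; not the test's own helper:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitCode returns the command's exit status (0 on success, -1 if it
// could not be started at all).
func exitCode(args ...string) int {
	err := exec.Command("minikube", args...).Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode()
	}
	if err != nil {
		return -1
	}
	return 0
}

func main() {
	p := "functional-473856"

	exitCode("-p", p, "config", "unset", "cpus")
	if c := exitCode("-p", p, "config", "get", "cpus"); c != 14 {
		fmt.Println("expected exit 14 for an unset key, got", c)
	}

	exitCode("-p", p, "config", "set", "cpus", "2")
	if c := exitCode("-p", p, "config", "get", "cpus"); c != 0 {
		fmt.Println("expected exit 0 after setting cpus, got", c)
	}

	exitCode("-p", p, "config", "unset", "cpus")
}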

                                                
                                    
TestFunctional/parallel/DashboardCmd (32.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-473856 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-473856 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 958265: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (32.70s)

                                                
                                    
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-473856 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-473856 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (137.981217ms)

                                                
                                                
-- stdout --
	* [functional-473856] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20151
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20151-942401/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-942401/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 11:32:51.671292  957346 out.go:345] Setting OutFile to fd 1 ...
	I0120 11:32:51.671394  957346 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:32:51.671403  957346 out.go:358] Setting ErrFile to fd 2...
	I0120 11:32:51.671408  957346 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:32:51.671611  957346 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-942401/.minikube/bin
	I0120 11:32:51.672099  957346 out.go:352] Setting JSON to false
	I0120 11:32:51.673147  957346 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":15315,"bootTime":1737357457,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 11:32:51.673258  957346 start.go:139] virtualization: kvm guest
	I0120 11:32:51.675587  957346 out.go:177] * [functional-473856] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 11:32:51.676889  957346 notify.go:220] Checking for updates...
	I0120 11:32:51.676896  957346 out.go:177]   - MINIKUBE_LOCATION=20151
	I0120 11:32:51.678320  957346 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 11:32:51.679784  957346 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 11:32:51.681149  957346 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-942401/.minikube
	I0120 11:32:51.682308  957346 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 11:32:51.683377  957346 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 11:32:51.685032  957346 config.go:182] Loaded profile config "functional-473856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 11:32:51.685719  957346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:32:51.685774  957346 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:32:51.703963  957346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40583
	I0120 11:32:51.704358  957346 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:32:51.704927  957346 main.go:141] libmachine: Using API Version  1
	I0120 11:32:51.704957  957346 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:32:51.705325  957346 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:32:51.705550  957346 main.go:141] libmachine: (functional-473856) Calling .DriverName
	I0120 11:32:51.705780  957346 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 11:32:51.706061  957346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:32:51.706104  957346 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:32:51.720250  957346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34457
	I0120 11:32:51.720691  957346 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:32:51.721182  957346 main.go:141] libmachine: Using API Version  1
	I0120 11:32:51.721200  957346 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:32:51.721528  957346 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:32:51.721715  957346 main.go:141] libmachine: (functional-473856) Calling .DriverName
	I0120 11:32:51.753927  957346 out.go:177] * Using the kvm2 driver based on existing profile
	I0120 11:32:51.755164  957346 start.go:297] selected driver: kvm2
	I0120 11:32:51.755181  957346 start.go:901] validating driver "kvm2" against &{Name:functional-473856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:functional-473856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8441 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 11:32:51.755311  957346 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 11:32:51.757186  957346 out.go:201] 
	W0120 11:32:51.758279  957346 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0120 11:32:51.759425  957346 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-473856 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)
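
The dry-run check hinges on exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY, as shown above) when --memory 250MB falls below the 1800MB usable minimum, while a dry run without the override validates cleanly. A sketch of that assertion, again assuming a `minikube` binary on PATH rather than the out/ build used by the job:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func exitCode(args ...string) int {
	err := exec.Command("minikube", args...).Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode()
	}
	if err != nil {
		return -1
	}
	return 0
}

func main() {
	p := "functional-473856"

	// 250MB is below minikube's 1800MB minimum, so the dry run should
	// refuse with exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY).
	if c := exitCode("start", "-p", p, "--dry-run", "--memory", "250MB",
		"--driver=kvm2", "--container-runtime=crio"); c != 23 {
		fmt.Println("expected exit 23 for too little memory, got", c)
	}

	// Without the memory override the dry run should validate cleanly.
	if c := exitCode("start", "-p", p, "--dry-run",
		"--driver=kvm2", "--container-runtime=crio"); c != 0 {
		fmt.Println("expected a clean dry run, got exit", c)
	}
}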

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-473856 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-473856 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (141.54477ms)

                                                
                                                
-- stdout --
	* [functional-473856] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20151
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20151-942401/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-942401/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 11:32:51.951257  957422 out.go:345] Setting OutFile to fd 1 ...
	I0120 11:32:51.951346  957422 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:32:51.951354  957422 out.go:358] Setting ErrFile to fd 2...
	I0120 11:32:51.951358  957422 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:32:51.951585  957422 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-942401/.minikube/bin
	I0120 11:32:51.952076  957422 out.go:352] Setting JSON to false
	I0120 11:32:51.953052  957422 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":15315,"bootTime":1737357457,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 11:32:51.953153  957422 start.go:139] virtualization: kvm guest
	I0120 11:32:51.954971  957422 out.go:177] * [functional-473856] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0120 11:32:51.956762  957422 notify.go:220] Checking for updates...
	I0120 11:32:51.956787  957422 out.go:177]   - MINIKUBE_LOCATION=20151
	I0120 11:32:51.958086  957422 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 11:32:51.959324  957422 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 11:32:51.960480  957422 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-942401/.minikube
	I0120 11:32:51.961561  957422 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 11:32:51.962571  957422 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 11:32:51.963899  957422 config.go:182] Loaded profile config "functional-473856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 11:32:51.964280  957422 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:32:51.964343  957422 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:32:51.980756  957422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34323
	I0120 11:32:51.981254  957422 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:32:51.981903  957422 main.go:141] libmachine: Using API Version  1
	I0120 11:32:51.981925  957422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:32:51.982340  957422 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:32:51.982538  957422 main.go:141] libmachine: (functional-473856) Calling .DriverName
	I0120 11:32:51.982794  957422 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 11:32:51.983067  957422 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:32:51.983110  957422 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:32:51.999422  957422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44419
	I0120 11:32:51.999787  957422 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:32:52.000365  957422 main.go:141] libmachine: Using API Version  1
	I0120 11:32:52.000390  957422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:32:52.000670  957422 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:32:52.000909  957422 main.go:141] libmachine: (functional-473856) Calling .DriverName
	I0120 11:32:52.035200  957422 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0120 11:32:52.036329  957422 start.go:297] selected driver: kvm2
	I0120 11:32:52.036340  957422 start.go:901] validating driver "kvm2" against &{Name:functional-473856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:functional-473856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8441 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 11:32:52.036441  957422 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 11:32:52.038338  957422 out.go:201] 
	W0120 11:32:52.039508  957422 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0120 11:32:52.040607  957422 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-473856 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-473856 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-d7mzt" [f83ab863-daec-4c13-af43-deada1e4bc76] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-d7mzt" [f83ab863-daec-4c13-af43-deada1e4bc76] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003780257s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.214:30158
functional_test.go:1675: http://192.168.39.214:30158: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-d7mzt

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.214:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.214:30158
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.51s)
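
The connect test boils down to: create a NodePort service for the echoserver deployment, ask minikube for its URL, and fetch it over HTTP. A stripped-down sketch of the final check (the deployment/expose steps are assumed to have run as above, and a `minikube` binary is assumed on PATH); the real test lives in functional_test.go:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube for the NodePort URL of the service created above.
	out, err := exec.Command("minikube", "-p", "functional-473856",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		fmt.Println("service --url failed:", err)
		return
	}
	url := strings.TrimSpace(string(out))

	// The echoserver reflects the request; a 200 with a Hostname line
	// means the pod behind the NodePort is reachable.
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("GET failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println("status:", resp.Status)
	fmt.Println("body mentions Hostname:", strings.Contains(string(body), "Hostname:"))
}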

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (43.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [68b5c184-55b0-46fb-8e1e-571a1faeb832] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.006947781s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-473856 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-473856 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-473856 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-473856 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [cfd3e062-3ace-4b77-acef-c689b88ce699] Pending
helpers_test.go:344: "sp-pod" [cfd3e062-3ace-4b77-acef-c689b88ce699] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [cfd3e062-3ace-4b77-acef-c689b88ce699] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.00486685s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-473856 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-473856 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-473856 delete -f testdata/storage-provisioner/pod.yaml: (4.963063499s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-473856 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6b245e5f-e76e-4492-a3f3-94a7e604bcec] Pending
helpers_test.go:344: "sp-pod" [6b245e5f-e76e-4492-a3f3-94a7e604bcec] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6b245e5f-e76e-4492-a3f3-94a7e604bcec] Running
2025/01/20 11:33:25 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.003542589s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-473856 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (43.76s)
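
What the PVC test is really proving is that data written into the claim survives the pod being deleted and recreated: touch a file in the first sp-pod, delete the pod, apply the manifest again, and list the file from the new pod. A rough shell-out sketch of that persistence check, assuming the kubectl context and testdata manifests used above; it omits the readiness waits the real test performs:

package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) error {
	full := append([]string{"--context", "functional-473856"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s", args, out)
	return err
}

func main() {
	// Write a marker file into the mounted claim from the first pod.
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")

	// Recreate the pod; the PVC (and its data) should be reattached.
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (the real test waits here until the new sp-pod is Running)

	// If the marker is still there, the volume persisted across pods.
	if err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount"); err != nil {
		fmt.Println("marker file missing after pod recreation:", err)
	}
}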

                                                
                                    
TestFunctional/parallel/SSHCmd (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.40s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 ssh -n functional-473856 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 cp functional-473856:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1831487280/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 ssh -n functional-473856 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 ssh -n functional-473856 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.34s)

                                                
                                    
TestFunctional/parallel/MySQL (22.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-473856 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-7grjt" [43d1fd4a-a61f-45f2-ba30-e9f8cb31251d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-7grjt" [43d1fd4a-a61f-45f2-ba30-e9f8cb31251d] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.004986312s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-473856 exec mysql-58ccfd96bb-7grjt -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-473856 exec mysql-58ccfd96bb-7grjt -- mysql -ppassword -e "show databases;": exit status 1 (156.685147ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0120 11:33:17.527176  949656 retry.go:31] will retry after 1.054790093s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-473856 exec mysql-58ccfd96bb-7grjt -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.56s)
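
The first `show databases;` above fails only because mysqld inside the pod is not yet accepting connections, so the test retries (retry.go waits about a second). A minimal retry loop with the same shape, assuming the kubectl context and the pod name from this particular run; illustrative only:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{"--context", "functional-473856", "exec",
		"mysql-58ccfd96bb-7grjt", "--", "mysql", "-ppassword", "-e", "show databases;"}

	// mysqld may still be starting when the pod reports Running, so
	// retry the query a few times before giving up.
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Printf("attempt %d succeeded:\n%s", attempt, out)
			return
		}
		fmt.Printf("attempt %d failed (%v), retrying\n", attempt, err)
		time.Sleep(time.Second)
	}
	fmt.Println("mysql never became reachable")
}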

                                                
                                    
TestFunctional/parallel/FileSync (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/949656/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 ssh "sudo cat /etc/test/nested/copy/949656/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

                                                
                                    
TestFunctional/parallel/CertSync (1.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/949656.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 ssh "sudo cat /etc/ssl/certs/949656.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/949656.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 ssh "sudo cat /usr/share/ca-certificates/949656.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/9496562.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 ssh "sudo cat /etc/ssl/certs/9496562.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/9496562.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 ssh "sudo cat /usr/share/ca-certificates/9496562.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.65s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-473856 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-473856 ssh "sudo systemctl is-active docker": exit status 1 (230.379348ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-473856 ssh "sudo systemctl is-active containerd": exit status 1 (222.337679ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)
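
With crio as the configured runtime, the test expects `systemctl is-active docker` and `containerd` to report inactive, which surfaces as ssh exit status 3 above (systemd's exit code for an inactive unit). A small sketch of that assertion, assuming a `minikube` binary on PATH; not the test's own code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// crio is the active runtime, so docker and containerd should both
	// be inactive inside the node.
	for _, unit := range []string{"docker", "containerd"} {
		out, err := exec.Command("minikube", "-p", "functional-473856",
			"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
		state := strings.TrimSpace(string(out))
		// "inactive" comes back with a non-zero exit (status 3), so an
		// error here is the expected outcome.
		fmt.Printf("%s: %q (err: %v)\n", unit, state, err)
		if state == "active" {
			fmt.Println(unit, "should not be running alongside crio")
		}
	}
}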

                                                
                                    
TestFunctional/parallel/License (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
functional_test.go:2288: (dbg) Done: out/minikube-linux-amd64 license: (1.165068503s)
--- PASS: TestFunctional/parallel/License (1.17s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-473856 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-473856 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-xsfnz" [0b37524a-a486-4ad9-a165-b5e1d01b6c84] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-xsfnz" [0b37524a-a486-4ad9-a165-b5e1d01b6c84] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004680285s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.20s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.61s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-473856 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/kube-proxy              | v1.32.0            | 040f9f8aac8cd | 95.3MB |
| registry.k8s.io/kube-scheduler          | v1.32.0            | a389e107f4ff1 | 70.6MB |
| docker.io/kindest/kindnetd              | v20241108-5c6d2daf | 50415e5d05f05 | 95MB   |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/kicbase/echo-server           | functional-473856  | 9056ab77afb8e | 4.94MB |
| localhost/my-image                      | functional-473856  | cd195f871f56f | 1.47MB |
| localhost/minikube-local-cache-test     | functional-473856  | f907e6ef21df6 | 3.33kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.32.0            | 8cab3d2a8bd0f | 90.8MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/etcd                    | 3.5.16-0           | a9e7e6b294baf | 151MB  |
| registry.k8s.io/kube-apiserver          | v1.32.0            | c2e17b8d0f4a3 | 98.1MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | latest             | 9bea9f2796e23 | 196MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-473856 image ls --format table --alsologtostderr:
I0120 11:33:14.308434  958585 out.go:345] Setting OutFile to fd 1 ...
I0120 11:33:14.308579  958585 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 11:33:14.308590  958585 out.go:358] Setting ErrFile to fd 2...
I0120 11:33:14.308597  958585 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 11:33:14.308802  958585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-942401/.minikube/bin
I0120 11:33:14.309431  958585 config.go:182] Loaded profile config "functional-473856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 11:33:14.309556  958585 config.go:182] Loaded profile config "functional-473856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 11:33:14.309915  958585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 11:33:14.309972  958585 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 11:33:14.324933  958585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39387
I0120 11:33:14.325530  958585 main.go:141] libmachine: () Calling .GetVersion
I0120 11:33:14.326157  958585 main.go:141] libmachine: Using API Version  1
I0120 11:33:14.326173  958585 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 11:33:14.326560  958585 main.go:141] libmachine: () Calling .GetMachineName
I0120 11:33:14.326776  958585 main.go:141] libmachine: (functional-473856) Calling .GetState
I0120 11:33:14.328573  958585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 11:33:14.328612  958585 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 11:33:14.343042  958585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33673
I0120 11:33:14.343498  958585 main.go:141] libmachine: () Calling .GetVersion
I0120 11:33:14.344061  958585 main.go:141] libmachine: Using API Version  1
I0120 11:33:14.344081  958585 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 11:33:14.344478  958585 main.go:141] libmachine: () Calling .GetMachineName
I0120 11:33:14.344698  958585 main.go:141] libmachine: (functional-473856) Calling .DriverName
I0120 11:33:14.344935  958585 ssh_runner.go:195] Run: systemctl --version
I0120 11:33:14.344972  958585 main.go:141] libmachine: (functional-473856) Calling .GetSSHHostname
I0120 11:33:14.347800  958585 main.go:141] libmachine: (functional-473856) DBG | domain functional-473856 has defined MAC address 52:54:00:9d:21:36 in network mk-functional-473856
I0120 11:33:14.348281  958585 main.go:141] libmachine: (functional-473856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:21:36", ip: ""} in network mk-functional-473856: {Iface:virbr1 ExpiryTime:2025-01-20 12:30:32 +0000 UTC Type:0 Mac:52:54:00:9d:21:36 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:functional-473856 Clientid:01:52:54:00:9d:21:36}
I0120 11:33:14.348315  958585 main.go:141] libmachine: (functional-473856) DBG | domain functional-473856 has defined IP address 192.168.39.214 and MAC address 52:54:00:9d:21:36 in network mk-functional-473856
I0120 11:33:14.348480  958585 main.go:141] libmachine: (functional-473856) Calling .GetSSHPort
I0120 11:33:14.348653  958585 main.go:141] libmachine: (functional-473856) Calling .GetSSHKeyPath
I0120 11:33:14.348834  958585 main.go:141] libmachine: (functional-473856) Calling .GetSSHUsername
I0120 11:33:14.348986  958585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/functional-473856/id_rsa Username:docker}
I0120 11:33:14.483939  958585 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 11:33:14.570021  958585 main.go:141] libmachine: Making call to close driver server
I0120 11:33:14.570047  958585 main.go:141] libmachine: (functional-473856) Calling .Close
I0120 11:33:14.570390  958585 main.go:141] libmachine: Successfully made call to close driver server
I0120 11:33:14.570412  958585 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 11:33:14.570433  958585 main.go:141] libmachine: Making call to close driver server
I0120 11:33:14.570442  958585 main.go:141] libmachine: (functional-473856) Calling .Close
I0120 11:33:14.570461  958585 main.go:141] libmachine: (functional-473856) DBG | Closing plugin on server side
I0120 11:33:14.570726  958585 main.go:141] libmachine: (functional-473856) DBG | Closing plugin on server side
I0120 11:33:14.570796  958585 main.go:141] libmachine: Successfully made call to close driver server
I0120 11:33:14.570811  958585 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-473856 image ls --format json --alsologtostderr:
[{"id":"50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e","repoDigests":["docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3","docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"94963761"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-473856"],"size":"4943877"},{"i
d":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1ce9d9222572dc72760ba18589a048b3cf32163dac0708522f3b991974fafdec","registry.k8s.io/kube-scheduler@sha256:84c998f7610b356a5eed24f801c01b273cf3e83f081f25c9b16aa8136c2cafb1"],"repoTags":["registry.k8s.io/kube-schedu
ler:v1.32.0"],"size":"70649156"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"3147
0524"},{"id":"c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:ebc0ce2d7e647dd97980ec338ad81496c111741ab4ad05e7c5d37539aaf7dc3b","registry.k8s.io/kube-apiserver@sha256:fe1eb8fc870b01f4b1f470d2b179a1d1a86d6e2fa174bd10c01bf45bc5b03200"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.0"],"size":"98051552"},{"id":"8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:0feb9730f9de32b0b1c5cc0eb756c1f4abf2246f1ac8d3fe75285bfee282d0ac","registry.k8s.io/kube-controller-manager@sha256:c8faedf1a5f3981ffade770c696b676d30613681a95be3287c1f7eec50e49b6d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.0"],"size":"90789190"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314e
d010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"151021823"},{"id":"040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08","repoDigests":["registry.k8s.io/kube-proxy@sha256:6aee00d0c7f4869144d1bdbbed7572cd55fd1a4d58fef5a21f53836054cb39b4","registry.k8s.io/kube-proxy@sha256:8db2ca0e784c2188157f005aac67afbbb70d3d68747eea23765bef83917a5a31"],"repo
Tags":["registry.k8s.io/kube-proxy:v1.32.0"],"size":"95270297"},{"id":"728c237fda8d0c276c3e8033adcf0bdfecc4dd96b16970bf809ca5892d58ffb0","repoDigests":["docker.io/library/5d85b8f3da3ca585c796c5e8e655d9f022c2e6a3c22938113d099fd2b312ac23-tmp@sha256:d0f445182d90484425b71bff847296bf80dde61e5f2f154391956fa190a53723"],"repoTags":[],"size":"1466015"},{"id":"9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1","repoDigests":["docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a","docker.io/library/nginx@sha256:2426c815287ed75a3a33dd28512eba4f0f783946844209ccf3fa8990817a4eb9"],"repoTags":["docker.io/library/nginx:latest"],"size":"195872148"},{"id":"f907e6ef21df66dd41d5b578386f4e5319636d58588dbdc22d5980e820f818cb","repoDigests":["localhost/minikube-local-cache-test@sha256:85e264e05d0b31b6ca1c7c5bf88327c078448b7bb7344612e8b789418ae937dc"],"repoTags":["localhost/minikube-local-cache-test:functional-473856"],"size":"3330"},{"id":"cd195f871f56fee37eafcb7e628508ac4e589
5ce85fe770f7028f8fdbb72d929","repoDigests":["localhost/my-image@sha256:3e93838959eafa9f2b816f907b673c8052834e07540d46d26c6946541878de9c"],"repoTags":["localhost/my-image:functional-473856"],"size":"1468599"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-473856 image ls --format json --alsologtostderr:
I0120 11:33:13.860666  958562 out.go:345] Setting OutFile to fd 1 ...
I0120 11:33:13.860782  958562 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 11:33:13.860792  958562 out.go:358] Setting ErrFile to fd 2...
I0120 11:33:13.860796  958562 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 11:33:13.860967  958562 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-942401/.minikube/bin
I0120 11:33:13.861612  958562 config.go:182] Loaded profile config "functional-473856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 11:33:13.861715  958562 config.go:182] Loaded profile config "functional-473856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 11:33:13.862131  958562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 11:33:13.862196  958562 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 11:33:13.877468  958562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36559
I0120 11:33:13.878055  958562 main.go:141] libmachine: () Calling .GetVersion
I0120 11:33:13.878708  958562 main.go:141] libmachine: Using API Version  1
I0120 11:33:13.878736  958562 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 11:33:13.879107  958562 main.go:141] libmachine: () Calling .GetMachineName
I0120 11:33:13.879332  958562 main.go:141] libmachine: (functional-473856) Calling .GetState
I0120 11:33:13.881234  958562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 11:33:13.881270  958562 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 11:33:13.898957  958562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41633
I0120 11:33:13.899482  958562 main.go:141] libmachine: () Calling .GetVersion
I0120 11:33:13.900060  958562 main.go:141] libmachine: Using API Version  1
I0120 11:33:13.900088  958562 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 11:33:13.900416  958562 main.go:141] libmachine: () Calling .GetMachineName
I0120 11:33:13.900591  958562 main.go:141] libmachine: (functional-473856) Calling .DriverName
I0120 11:33:13.900761  958562 ssh_runner.go:195] Run: systemctl --version
I0120 11:33:13.900787  958562 main.go:141] libmachine: (functional-473856) Calling .GetSSHHostname
I0120 11:33:13.903642  958562 main.go:141] libmachine: (functional-473856) DBG | domain functional-473856 has defined MAC address 52:54:00:9d:21:36 in network mk-functional-473856
I0120 11:33:13.904045  958562 main.go:141] libmachine: (functional-473856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:21:36", ip: ""} in network mk-functional-473856: {Iface:virbr1 ExpiryTime:2025-01-20 12:30:32 +0000 UTC Type:0 Mac:52:54:00:9d:21:36 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:functional-473856 Clientid:01:52:54:00:9d:21:36}
I0120 11:33:13.904081  958562 main.go:141] libmachine: (functional-473856) DBG | domain functional-473856 has defined IP address 192.168.39.214 and MAC address 52:54:00:9d:21:36 in network mk-functional-473856
I0120 11:33:13.904282  958562 main.go:141] libmachine: (functional-473856) Calling .GetSSHPort
I0120 11:33:13.904466  958562 main.go:141] libmachine: (functional-473856) Calling .GetSSHKeyPath
I0120 11:33:13.904629  958562 main.go:141] libmachine: (functional-473856) Calling .GetSSHUsername
I0120 11:33:13.904790  958562 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/functional-473856/id_rsa Username:docker}
I0120 11:33:14.039768  958562 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 11:33:14.254438  958562 main.go:141] libmachine: Making call to close driver server
I0120 11:33:14.254455  958562 main.go:141] libmachine: (functional-473856) Calling .Close
I0120 11:33:14.254776  958562 main.go:141] libmachine: Successfully made call to close driver server
I0120 11:33:14.254799  958562 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 11:33:14.254816  958562 main.go:141] libmachine: Making call to close driver server
I0120 11:33:14.254825  958562 main.go:141] libmachine: (functional-473856) Calling .Close
I0120 11:33:14.255107  958562 main.go:141] libmachine: Successfully made call to close driver server
I0120 11:33:14.255125  958562 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 11:33:14.255138  958562 main.go:141] libmachine: (functional-473856) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.45s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 image ls --format yaml --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-linux-amd64 -p functional-473856 image ls --format yaml --alsologtostderr: (1.240909446s)
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-473856 image ls --format yaml --alsologtostderr:
- id: 50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e
repoDigests:
- docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "94963761"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "151021823"
- id: c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:ebc0ce2d7e647dd97980ec338ad81496c111741ab4ad05e7c5d37539aaf7dc3b
- registry.k8s.io/kube-apiserver@sha256:fe1eb8fc870b01f4b1f470d2b179a1d1a86d6e2fa174bd10c01bf45bc5b03200
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.0
size: "98051552"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1
repoDigests:
- docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a
- docker.io/library/nginx@sha256:2426c815287ed75a3a33dd28512eba4f0f783946844209ccf3fa8990817a4eb9
repoTags:
- docker.io/library/nginx:latest
size: "195872148"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-473856
size: "4943877"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:0feb9730f9de32b0b1c5cc0eb756c1f4abf2246f1ac8d3fe75285bfee282d0ac
- registry.k8s.io/kube-controller-manager@sha256:c8faedf1a5f3981ffade770c696b676d30613681a95be3287c1f7eec50e49b6d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.0
size: "90789190"
- id: 040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08
repoDigests:
- registry.k8s.io/kube-proxy@sha256:6aee00d0c7f4869144d1bdbbed7572cd55fd1a4d58fef5a21f53836054cb39b4
- registry.k8s.io/kube-proxy@sha256:8db2ca0e784c2188157f005aac67afbbb70d3d68747eea23765bef83917a5a31
repoTags:
- registry.k8s.io/kube-proxy:v1.32.0
size: "95270297"
- id: a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1ce9d9222572dc72760ba18589a048b3cf32163dac0708522f3b991974fafdec
- registry.k8s.io/kube-scheduler@sha256:84c998f7610b356a5eed24f801c01b273cf3e83f081f25c9b16aa8136c2cafb1
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.0
size: "70649156"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: f907e6ef21df66dd41d5b578386f4e5319636d58588dbdc22d5980e820f818cb
repoDigests:
- localhost/minikube-local-cache-test@sha256:85e264e05d0b31b6ca1c7c5bf88327c078448b7bb7344612e8b789418ae937dc
repoTags:
- localhost/minikube-local-cache-test:functional-473856
size: "3330"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-473856 image ls --format yaml --alsologtostderr:
I0120 11:33:08.204133  958433 out.go:345] Setting OutFile to fd 1 ...
I0120 11:33:08.204237  958433 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 11:33:08.204245  958433 out.go:358] Setting ErrFile to fd 2...
I0120 11:33:08.204249  958433 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 11:33:08.204435  958433 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-942401/.minikube/bin
I0120 11:33:08.205028  958433 config.go:182] Loaded profile config "functional-473856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 11:33:08.205137  958433 config.go:182] Loaded profile config "functional-473856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 11:33:08.205501  958433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 11:33:08.205554  958433 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 11:33:08.220914  958433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35335
I0120 11:33:08.221450  958433 main.go:141] libmachine: () Calling .GetVersion
I0120 11:33:08.222134  958433 main.go:141] libmachine: Using API Version  1
I0120 11:33:08.222166  958433 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 11:33:08.222541  958433 main.go:141] libmachine: () Calling .GetMachineName
I0120 11:33:08.222779  958433 main.go:141] libmachine: (functional-473856) Calling .GetState
I0120 11:33:08.224707  958433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 11:33:08.224781  958433 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 11:33:08.239401  958433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38739
I0120 11:33:08.239963  958433 main.go:141] libmachine: () Calling .GetVersion
I0120 11:33:08.240546  958433 main.go:141] libmachine: Using API Version  1
I0120 11:33:08.240593  958433 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 11:33:08.240995  958433 main.go:141] libmachine: () Calling .GetMachineName
I0120 11:33:08.241206  958433 main.go:141] libmachine: (functional-473856) Calling .DriverName
I0120 11:33:08.241440  958433 ssh_runner.go:195] Run: systemctl --version
I0120 11:33:08.241470  958433 main.go:141] libmachine: (functional-473856) Calling .GetSSHHostname
I0120 11:33:08.244452  958433 main.go:141] libmachine: (functional-473856) DBG | domain functional-473856 has defined MAC address 52:54:00:9d:21:36 in network mk-functional-473856
I0120 11:33:08.244875  958433 main.go:141] libmachine: (functional-473856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:21:36", ip: ""} in network mk-functional-473856: {Iface:virbr1 ExpiryTime:2025-01-20 12:30:32 +0000 UTC Type:0 Mac:52:54:00:9d:21:36 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:functional-473856 Clientid:01:52:54:00:9d:21:36}
I0120 11:33:08.244905  958433 main.go:141] libmachine: (functional-473856) DBG | domain functional-473856 has defined IP address 192.168.39.214 and MAC address 52:54:00:9d:21:36 in network mk-functional-473856
I0120 11:33:08.245084  958433 main.go:141] libmachine: (functional-473856) Calling .GetSSHPort
I0120 11:33:08.245271  958433 main.go:141] libmachine: (functional-473856) Calling .GetSSHKeyPath
I0120 11:33:08.245426  958433 main.go:141] libmachine: (functional-473856) Calling .GetSSHUsername
I0120 11:33:08.245591  958433 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/functional-473856/id_rsa Username:docker}
I0120 11:33:08.345004  958433 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 11:33:09.392292  958433 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.0472486s)
I0120 11:33:09.393038  958433 main.go:141] libmachine: Making call to close driver server
I0120 11:33:09.393055  958433 main.go:141] libmachine: (functional-473856) Calling .Close
I0120 11:33:09.393399  958433 main.go:141] libmachine: Successfully made call to close driver server
I0120 11:33:09.393419  958433 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 11:33:09.393429  958433 main.go:141] libmachine: Making call to close driver server
I0120 11:33:09.393429  958433 main.go:141] libmachine: (functional-473856) DBG | Closing plugin on server side
I0120 11:33:09.393439  958433 main.go:141] libmachine: (functional-473856) Calling .Close
I0120 11:33:09.393678  958433 main.go:141] libmachine: Successfully made call to close driver server
I0120 11:33:09.393692  958433 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (1.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-473856 ssh pgrep buildkitd: exit status 1 (225.116787ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 image build -t localhost/my-image:functional-473856 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-473856 image build -t localhost/my-image:functional-473856 testdata/build --alsologtostderr: (3.868823947s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-473856 image build -t localhost/my-image:functional-473856 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 728c237fda8
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-473856
--> cd195f871f5
Successfully tagged localhost/my-image:functional-473856
cd195f871f56fee37eafcb7e628508ac4e5895ce85fe770f7028f8fdbb72d929
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-473856 image build -t localhost/my-image:functional-473856 testdata/build --alsologtostderr:
I0120 11:33:09.681086  958513 out.go:345] Setting OutFile to fd 1 ...
I0120 11:33:09.681210  958513 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 11:33:09.681222  958513 out.go:358] Setting ErrFile to fd 2...
I0120 11:33:09.681227  958513 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 11:33:09.681450  958513 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-942401/.minikube/bin
I0120 11:33:09.682097  958513 config.go:182] Loaded profile config "functional-473856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 11:33:09.682738  958513 config.go:182] Loaded profile config "functional-473856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 11:33:09.683402  958513 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 11:33:09.683457  958513 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 11:33:09.699356  958513 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38547
I0120 11:33:09.699859  958513 main.go:141] libmachine: () Calling .GetVersion
I0120 11:33:09.700523  958513 main.go:141] libmachine: Using API Version  1
I0120 11:33:09.700558  958513 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 11:33:09.700929  958513 main.go:141] libmachine: () Calling .GetMachineName
I0120 11:33:09.701176  958513 main.go:141] libmachine: (functional-473856) Calling .GetState
I0120 11:33:09.703184  958513 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 11:33:09.703240  958513 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 11:33:09.717730  958513 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43719
I0120 11:33:09.718211  958513 main.go:141] libmachine: () Calling .GetVersion
I0120 11:33:09.718725  958513 main.go:141] libmachine: Using API Version  1
I0120 11:33:09.718741  958513 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 11:33:09.719050  958513 main.go:141] libmachine: () Calling .GetMachineName
I0120 11:33:09.719231  958513 main.go:141] libmachine: (functional-473856) Calling .DriverName
I0120 11:33:09.719431  958513 ssh_runner.go:195] Run: systemctl --version
I0120 11:33:09.719465  958513 main.go:141] libmachine: (functional-473856) Calling .GetSSHHostname
I0120 11:33:09.722478  958513 main.go:141] libmachine: (functional-473856) DBG | domain functional-473856 has defined MAC address 52:54:00:9d:21:36 in network mk-functional-473856
I0120 11:33:09.722982  958513 main.go:141] libmachine: (functional-473856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:21:36", ip: ""} in network mk-functional-473856: {Iface:virbr1 ExpiryTime:2025-01-20 12:30:32 +0000 UTC Type:0 Mac:52:54:00:9d:21:36 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:functional-473856 Clientid:01:52:54:00:9d:21:36}
I0120 11:33:09.723019  958513 main.go:141] libmachine: (functional-473856) DBG | domain functional-473856 has defined IP address 192.168.39.214 and MAC address 52:54:00:9d:21:36 in network mk-functional-473856
I0120 11:33:09.723214  958513 main.go:141] libmachine: (functional-473856) Calling .GetSSHPort
I0120 11:33:09.723399  958513 main.go:141] libmachine: (functional-473856) Calling .GetSSHKeyPath
I0120 11:33:09.723570  958513 main.go:141] libmachine: (functional-473856) Calling .GetSSHUsername
I0120 11:33:09.723748  958513 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/functional-473856/id_rsa Username:docker}
I0120 11:33:09.804506  958513 build_images.go:161] Building image from path: /tmp/build.3035386460.tar
I0120 11:33:09.804601  958513 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0120 11:33:09.814614  958513 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3035386460.tar
I0120 11:33:09.818560  958513 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3035386460.tar: stat -c "%s %y" /var/lib/minikube/build/build.3035386460.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3035386460.tar': No such file or directory
I0120 11:33:09.818594  958513 ssh_runner.go:362] scp /tmp/build.3035386460.tar --> /var/lib/minikube/build/build.3035386460.tar (3072 bytes)
I0120 11:33:09.842959  958513 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3035386460
I0120 11:33:09.857213  958513 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3035386460 -xf /var/lib/minikube/build/build.3035386460.tar
I0120 11:33:09.866422  958513 crio.go:315] Building image: /var/lib/minikube/build/build.3035386460
I0120 11:33:09.866505  958513 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-473856 /var/lib/minikube/build/build.3035386460 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0120 11:33:13.432495  958513 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-473856 /var/lib/minikube/build/build.3035386460 --cgroup-manager=cgroupfs: (3.565951651s)
I0120 11:33:13.432588  958513 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3035386460
I0120 11:33:13.453836  958513 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3035386460.tar
I0120 11:33:13.487186  958513 build_images.go:217] Built localhost/my-image:functional-473856 from /tmp/build.3035386460.tar
I0120 11:33:13.487224  958513 build_images.go:133] succeeded building to: functional-473856
I0120 11:33:13.487231  958513 build_images.go:134] failed building to: 
I0120 11:33:13.487257  958513 main.go:141] libmachine: Making call to close driver server
I0120 11:33:13.487270  958513 main.go:141] libmachine: (functional-473856) Calling .Close
I0120 11:33:13.487612  958513 main.go:141] libmachine: Successfully made call to close driver server
I0120 11:33:13.487634  958513 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 11:33:13.487647  958513 main.go:141] libmachine: Making call to close driver server
I0120 11:33:13.487656  958513 main.go:141] libmachine: (functional-473856) Calling .Close
I0120 11:33:13.487906  958513 main.go:141] libmachine: Successfully made call to close driver server
I0120 11:33:13.487921  958513 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.41s)
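Note: the three STEP lines in the build output above imply that the testdata/build context contains a content.txt file and a Containerfile roughly like the sketch below. This is reconstructed from the log, not the verbatim contents of testdata/build; the real file may pin a specific busybox tag rather than the default.

	# base image pulled during the build (the log shows gcr.io/k8s-minikube/busybox:latest)
	FROM gcr.io/k8s-minikube/busybox
	# no-op layer, matching "STEP 2/3: RUN true"
	RUN true
	# copies content.txt from the build context into the image root
	ADD content.txt /

As the ssh_runner lines record, the build context is tarred up locally, copied to /var/lib/minikube/build on the node, unpacked, and built there with podman.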

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.7393535s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-473856
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.76s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.50s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "474.091541ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "66.775453ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.54s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-473856 /tmp/TestFunctionalparallelMountCmdany-port3455035738/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1737372763457037865" to /tmp/TestFunctionalparallelMountCmdany-port3455035738/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1737372763457037865" to /tmp/TestFunctionalparallelMountCmdany-port3455035738/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1737372763457037865" to /tmp/TestFunctionalparallelMountCmdany-port3455035738/001/test-1737372763457037865
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-473856 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (290.144666ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0120 11:32:43.747567  949656 retry.go:31] will retry after 521.99136ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 20 11:32 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 20 11:32 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 20 11:32 test-1737372763457037865
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 ssh cat /mount-9p/test-1737372763457037865
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-473856 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [35860cfb-8b32-49e8-9e42-068faf91448a] Pending
helpers_test.go:344: "busybox-mount" [35860cfb-8b32-49e8-9e42-068faf91448a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [35860cfb-8b32-49e8-9e42-068faf91448a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [35860cfb-8b32-49e8-9e42-068faf91448a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004388947s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-473856 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-473856 /tmp/TestFunctionalparallelMountCmdany-port3455035738/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.64s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "492.932988ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "59.455263ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 image load --daemon kicbase/echo-server:functional-473856 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-473856 image load --daemon kicbase/echo-server:functional-473856 --alsologtostderr: (2.036868841s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 image load --daemon kicbase/echo-server:functional-473856 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-473856
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 image load --daemon kicbase/echo-server:functional-473856 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.66s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 image save kicbase/echo-server:functional-473856 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.61s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 image rm kicbase/echo-server:functional-473856 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.95s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-473856
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 image save --daemon kicbase/echo-server:functional-473856 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-473856
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.54s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-473856 /tmp/TestFunctionalparallelMountCmdspecific-port3896099149/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-473856 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (272.921271ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0120 11:32:52.369622  949656 retry.go:31] will retry after 413.081503ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-473856 /tmp/TestFunctionalparallelMountCmdspecific-port3896099149/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-473856 ssh "sudo umount -f /mount-9p": exit status 1 (241.586707ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-473856 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-473856 /tmp/TestFunctionalparallelMountCmdspecific-port3896099149/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.88s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 service list -o json
functional_test.go:1494: Took "333.54062ms" to run "out/minikube-linux-amd64 -p functional-473856 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.214:31937
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.72s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-473856 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4272785193/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-473856 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4272785193/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-473856 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4272785193/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-473856 ssh "findmnt -T" /mount1: exit status 1 (374.435677ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0120 11:32:54.347892  949656 retry.go:31] will retry after 732.763858ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-473856 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-473856 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4272785193/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-473856 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4272785193/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-473856 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4272785193/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.96s)
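The VerifyCleanup run above mounts one host directory at three guest paths, probes each with findmnt over ssh, and retries once (see the retry.go line) because the mounts can lag behind the mount daemons starting. Below is a minimal sketch of that probe-and-retry pattern; it is not the harness's helper, the binary path and profile name are taken from the log, and the retry count and delay are arbitrary.

// probe_mounts.go: illustrative sketch of the probe-and-retry pattern above.
// Assumes a running profile "functional-473856" with /mount1..3 already mounted.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// findmnt runs `minikube ssh "findmnt -T <path>"` and reports whether the
// mount point is visible inside the guest yet.
func findmnt(profile, path string) error {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", "findmnt -T "+path)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("findmnt %s: %v\n%s", path, err, out)
	}
	return nil
}

func main() {
	const profile = "functional-473856"
	for _, mount := range []string{"/mount1", "/mount2", "/mount3"} {
		// The mount daemon may still be settling, so retry briefly instead of
		// failing on the first non-zero exit (compare the retry.go line above).
		var err error
		for attempt := 0; attempt < 3; attempt++ {
			if err = findmnt(profile, mount); err == nil {
				break
			}
			time.Sleep(750 * time.Millisecond)
		}
		if err != nil {
			fmt.Println("mount not visible:", err)
			continue
		}
		fmt.Println("mount visible:", mount)
	}
}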

TestFunctional/parallel/ServiceCmd/URL (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-473856 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.214:31937
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.48s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-473856
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-473856
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-473856
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (190.49s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-274516 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0120 11:34:37.398970  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:35:05.111162  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-274516 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m9.805763318s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (190.49s)

TestMultiControlPlane/serial/DeployApp (6.99s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274516 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274516 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-274516 -- rollout status deployment/busybox: (4.910326784s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274516 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274516 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274516 -- exec busybox-58667487b6-cs2xs -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274516 -- exec busybox-58667487b6-hptmw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274516 -- exec busybox-58667487b6-mwbc9 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274516 -- exec busybox-58667487b6-cs2xs -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274516 -- exec busybox-58667487b6-hptmw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274516 -- exec busybox-58667487b6-mwbc9 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274516 -- exec busybox-58667487b6-cs2xs -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274516 -- exec busybox-58667487b6-hptmw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274516 -- exec busybox-58667487b6-mwbc9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.99s)
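The DeployApp run above rolls out a busybox deployment and then checks in-cluster DNS from every replica. The sketch below shows the same per-pod nslookup loop in plain Go; it is illustrative only, it calls kubectl directly rather than the minikube kubectl wrapper used in the log, and it assumes the busybox pods are the only pods in the current namespace.

// dns_check.go: illustrative per-pod DNS check, not the test's own code.
// Assumes kubectl is on PATH and a busybox deployment is already rolled out.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Collect pod names the same way the log does: a jsonpath over metadata.name.
	out, err := exec.Command("kubectl", "get", "pods",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		panic(err)
	}
	for _, pod := range strings.Fields(string(out)) {
		for _, host := range []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"} {
			// Resolve each name from inside the pod; a failure here usually points
			// at CoreDNS or the pod network rather than the workload itself.
			cmd := exec.Command("kubectl", "exec", pod, "--", "nslookup", host)
			if res, err := cmd.CombinedOutput(); err != nil {
				fmt.Printf("%s: lookup %s failed: %v\n%s\n", pod, host, err, res)
			} else {
				fmt.Printf("%s: %s resolves\n", pod, host)
			}
		}
	}
}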

TestMultiControlPlane/serial/PingHostFromPods (1.17s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274516 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274516 -- exec busybox-58667487b6-cs2xs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274516 -- exec busybox-58667487b6-cs2xs -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274516 -- exec busybox-58667487b6-hptmw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274516 -- exec busybox-58667487b6-hptmw -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274516 -- exec busybox-58667487b6-mwbc9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-274516 -- exec busybox-58667487b6-mwbc9 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.17s)
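PingHostFromPods resolves the special name host.minikube.internal inside each pod and pings the address it resolves to, proving the pods can reach the host. A small illustrative sketch follows; the pod name busybox-demo is a placeholder, and the nslookup/awk/cut pipeline is copied verbatim from the log rather than being a robust parser.

// ping_host.go: illustrative host-reachability check, not the test's code.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const pod = "busybox-demo" // placeholder pod name
	// Resolve host.minikube.internal inside the pod and ping whatever it
	// resolves to, mirroring the nslookup/ping pair in the log above.
	script := `ip=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3); ping -c 1 "$ip"`
	out, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", script).CombinedOutput()
	if err != nil {
		fmt.Printf("host not reachable from %s: %v\n%s", pod, err, out)
		return
	}
	fmt.Printf("host reachable from %s:\n%s", pod, out)
}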

TestMultiControlPlane/serial/AddWorkerNode (58.51s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-274516 -v=7 --alsologtostderr
E0120 11:37:41.308141  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/functional-473856/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:37:41.314585  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/functional-473856/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:37:41.325932  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/functional-473856/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:37:41.347282  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/functional-473856/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:37:41.388643  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/functional-473856/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:37:41.470899  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/functional-473856/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:37:41.633143  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/functional-473856/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:37:41.955417  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/functional-473856/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:37:42.596739  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/functional-473856/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:37:43.878568  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/functional-473856/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-274516 -v=7 --alsologtostderr: (57.682438699s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (58.51s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-274516 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0120 11:37:46.440325  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/functional-473856/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

TestMultiControlPlane/serial/CopyFile (12.76s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 cp testdata/cp-test.txt ha-274516:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 ssh -n ha-274516 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 cp ha-274516:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4264550074/001/cp-test_ha-274516.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 ssh -n ha-274516 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 cp ha-274516:/home/docker/cp-test.txt ha-274516-m02:/home/docker/cp-test_ha-274516_ha-274516-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 ssh -n ha-274516 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 ssh -n ha-274516-m02 "sudo cat /home/docker/cp-test_ha-274516_ha-274516-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 cp ha-274516:/home/docker/cp-test.txt ha-274516-m03:/home/docker/cp-test_ha-274516_ha-274516-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 ssh -n ha-274516 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 ssh -n ha-274516-m03 "sudo cat /home/docker/cp-test_ha-274516_ha-274516-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 cp ha-274516:/home/docker/cp-test.txt ha-274516-m04:/home/docker/cp-test_ha-274516_ha-274516-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 ssh -n ha-274516 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 ssh -n ha-274516-m04 "sudo cat /home/docker/cp-test_ha-274516_ha-274516-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 cp testdata/cp-test.txt ha-274516-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 ssh -n ha-274516-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 cp ha-274516-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4264550074/001/cp-test_ha-274516-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 ssh -n ha-274516-m02 "sudo cat /home/docker/cp-test.txt"
E0120 11:37:51.562037  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/functional-473856/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 cp ha-274516-m02:/home/docker/cp-test.txt ha-274516:/home/docker/cp-test_ha-274516-m02_ha-274516.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 ssh -n ha-274516-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 ssh -n ha-274516 "sudo cat /home/docker/cp-test_ha-274516-m02_ha-274516.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 cp ha-274516-m02:/home/docker/cp-test.txt ha-274516-m03:/home/docker/cp-test_ha-274516-m02_ha-274516-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 ssh -n ha-274516-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 ssh -n ha-274516-m03 "sudo cat /home/docker/cp-test_ha-274516-m02_ha-274516-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 cp ha-274516-m02:/home/docker/cp-test.txt ha-274516-m04:/home/docker/cp-test_ha-274516-m02_ha-274516-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 ssh -n ha-274516-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 ssh -n ha-274516-m04 "sudo cat /home/docker/cp-test_ha-274516-m02_ha-274516-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 cp testdata/cp-test.txt ha-274516-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 ssh -n ha-274516-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 cp ha-274516-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4264550074/001/cp-test_ha-274516-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 ssh -n ha-274516-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 cp ha-274516-m03:/home/docker/cp-test.txt ha-274516:/home/docker/cp-test_ha-274516-m03_ha-274516.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 ssh -n ha-274516-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 ssh -n ha-274516 "sudo cat /home/docker/cp-test_ha-274516-m03_ha-274516.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 cp ha-274516-m03:/home/docker/cp-test.txt ha-274516-m02:/home/docker/cp-test_ha-274516-m03_ha-274516-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 ssh -n ha-274516-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 ssh -n ha-274516-m02 "sudo cat /home/docker/cp-test_ha-274516-m03_ha-274516-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 cp ha-274516-m03:/home/docker/cp-test.txt ha-274516-m04:/home/docker/cp-test_ha-274516-m03_ha-274516-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 ssh -n ha-274516-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 ssh -n ha-274516-m04 "sudo cat /home/docker/cp-test_ha-274516-m03_ha-274516-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 cp testdata/cp-test.txt ha-274516-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 ssh -n ha-274516-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 cp ha-274516-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4264550074/001/cp-test_ha-274516-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 ssh -n ha-274516-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 cp ha-274516-m04:/home/docker/cp-test.txt ha-274516:/home/docker/cp-test_ha-274516-m04_ha-274516.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 ssh -n ha-274516-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 ssh -n ha-274516 "sudo cat /home/docker/cp-test_ha-274516-m04_ha-274516.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 cp ha-274516-m04:/home/docker/cp-test.txt ha-274516-m02:/home/docker/cp-test_ha-274516-m04_ha-274516-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 ssh -n ha-274516-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 ssh -n ha-274516-m02 "sudo cat /home/docker/cp-test_ha-274516-m04_ha-274516-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 cp ha-274516-m04:/home/docker/cp-test.txt ha-274516-m03:/home/docker/cp-test_ha-274516-m04_ha-274516-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 ssh -n ha-274516-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 ssh -n ha-274516-m03 "sudo cat /home/docker/cp-test_ha-274516-m04_ha-274516-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.76s)
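The CopyFile run above exercises minikube cp in every direction (host to node, node to host, node to node) and verifies each copy by cat-ing the file over ssh on the target node. Below is a trimmed sketch of one host-to-node and one node-to-node round trip; it is not minikube's own helper, and the profile name, binary path, and fixture path are taken from the log.

// cp_verify.go: illustrative copy-and-verify round trip, not minikube's code.
// Assumes a running multi-node profile "ha-274516" and a local testdata/cp-test.txt.
package main

import (
	"fmt"
	"os/exec"
)

const profile = "ha-274516"

// run invokes the minikube binary against the profile and returns combined output.
func run(args ...string) ([]byte, error) {
	cmd := exec.Command("out/minikube-linux-amd64", append([]string{"-p", profile}, args...)...)
	return cmd.CombinedOutput()
}

func main() {
	// Host -> node: copy the fixture onto the primary control-plane node.
	if out, err := run("cp", "testdata/cp-test.txt", profile+":/home/docker/cp-test.txt"); err != nil {
		panic(fmt.Sprintf("cp failed: %v\n%s", err, out))
	}
	// Node -> node: fan the same file out to the second control plane.
	if out, err := run("cp", profile+":/home/docker/cp-test.txt",
		profile+"-m02:/home/docker/cp-test_from_primary.txt"); err != nil {
		panic(fmt.Sprintf("node-to-node cp failed: %v\n%s", err, out))
	}
	// Verify exactly as the log does: cat the file over ssh on the target node.
	out, err := run("ssh", "-n", profile+"-m02", "sudo cat /home/docker/cp-test_from_primary.txt")
	if err != nil {
		panic(fmt.Sprintf("verify failed: %v\n%s", err, out))
	}
	fmt.Printf("copied content on %s-m02:\n%s", profile, out)
}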

TestMultiControlPlane/serial/StopSecondaryNode (91.42s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 node stop m02 -v=7 --alsologtostderr
E0120 11:38:01.803770  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/functional-473856/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:38:22.285264  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/functional-473856/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:39:03.247156  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/functional-473856/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-274516 node stop m02 -v=7 --alsologtostderr: (1m30.768376729s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-274516 status -v=7 --alsologtostderr: exit status 7 (651.895421ms)

-- stdout --
	ha-274516
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-274516-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-274516-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-274516-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0120 11:39:30.532299  963316 out.go:345] Setting OutFile to fd 1 ...
	I0120 11:39:30.532426  963316 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:39:30.532436  963316 out.go:358] Setting ErrFile to fd 2...
	I0120 11:39:30.532444  963316 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:39:30.532605  963316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-942401/.minikube/bin
	I0120 11:39:30.532802  963316 out.go:352] Setting JSON to false
	I0120 11:39:30.532848  963316 mustload.go:65] Loading cluster: ha-274516
	I0120 11:39:30.532942  963316 notify.go:220] Checking for updates...
	I0120 11:39:30.533496  963316 config.go:182] Loaded profile config "ha-274516": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 11:39:30.533536  963316 status.go:174] checking status of ha-274516 ...
	I0120 11:39:30.534114  963316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:39:30.534172  963316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:39:30.556372  963316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46099
	I0120 11:39:30.556774  963316 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:39:30.557397  963316 main.go:141] libmachine: Using API Version  1
	I0120 11:39:30.557422  963316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:39:30.557836  963316 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:39:30.558062  963316 main.go:141] libmachine: (ha-274516) Calling .GetState
	I0120 11:39:30.559792  963316 status.go:371] ha-274516 host status = "Running" (err=<nil>)
	I0120 11:39:30.559812  963316 host.go:66] Checking if "ha-274516" exists ...
	I0120 11:39:30.560119  963316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:39:30.560170  963316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:39:30.575129  963316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39787
	I0120 11:39:30.575666  963316 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:39:30.576257  963316 main.go:141] libmachine: Using API Version  1
	I0120 11:39:30.576296  963316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:39:30.576711  963316 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:39:30.576929  963316 main.go:141] libmachine: (ha-274516) Calling .GetIP
	I0120 11:39:30.580475  963316 main.go:141] libmachine: (ha-274516) DBG | domain ha-274516 has defined MAC address 52:54:00:af:a8:ec in network mk-ha-274516
	I0120 11:39:30.581150  963316 main.go:141] libmachine: (ha-274516) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:a8:ec", ip: ""} in network mk-ha-274516: {Iface:virbr1 ExpiryTime:2025-01-20 12:33:43 +0000 UTC Type:0 Mac:52:54:00:af:a8:ec Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-274516 Clientid:01:52:54:00:af:a8:ec}
	I0120 11:39:30.581179  963316 main.go:141] libmachine: (ha-274516) DBG | domain ha-274516 has defined IP address 192.168.39.99 and MAC address 52:54:00:af:a8:ec in network mk-ha-274516
	I0120 11:39:30.581346  963316 host.go:66] Checking if "ha-274516" exists ...
	I0120 11:39:30.581649  963316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:39:30.581707  963316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:39:30.598795  963316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36635
	I0120 11:39:30.599385  963316 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:39:30.599869  963316 main.go:141] libmachine: Using API Version  1
	I0120 11:39:30.599894  963316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:39:30.600304  963316 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:39:30.600573  963316 main.go:141] libmachine: (ha-274516) Calling .DriverName
	I0120 11:39:30.600821  963316 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 11:39:30.600875  963316 main.go:141] libmachine: (ha-274516) Calling .GetSSHHostname
	I0120 11:39:30.604295  963316 main.go:141] libmachine: (ha-274516) DBG | domain ha-274516 has defined MAC address 52:54:00:af:a8:ec in network mk-ha-274516
	I0120 11:39:30.604687  963316 main.go:141] libmachine: (ha-274516) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:a8:ec", ip: ""} in network mk-ha-274516: {Iface:virbr1 ExpiryTime:2025-01-20 12:33:43 +0000 UTC Type:0 Mac:52:54:00:af:a8:ec Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-274516 Clientid:01:52:54:00:af:a8:ec}
	I0120 11:39:30.604714  963316 main.go:141] libmachine: (ha-274516) DBG | domain ha-274516 has defined IP address 192.168.39.99 and MAC address 52:54:00:af:a8:ec in network mk-ha-274516
	I0120 11:39:30.604851  963316 main.go:141] libmachine: (ha-274516) Calling .GetSSHPort
	I0120 11:39:30.605015  963316 main.go:141] libmachine: (ha-274516) Calling .GetSSHKeyPath
	I0120 11:39:30.605170  963316 main.go:141] libmachine: (ha-274516) Calling .GetSSHUsername
	I0120 11:39:30.605316  963316 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/ha-274516/id_rsa Username:docker}
	I0120 11:39:30.694671  963316 ssh_runner.go:195] Run: systemctl --version
	I0120 11:39:30.701595  963316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 11:39:30.717901  963316 kubeconfig.go:125] found "ha-274516" server: "https://192.168.39.254:8443"
	I0120 11:39:30.717982  963316 api_server.go:166] Checking apiserver status ...
	I0120 11:39:30.718026  963316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 11:39:30.734283  963316 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1173/cgroup
	W0120 11:39:30.744658  963316 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1173/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0120 11:39:30.744703  963316 ssh_runner.go:195] Run: ls
	I0120 11:39:30.748356  963316 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0120 11:39:30.754094  963316 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0120 11:39:30.754115  963316 status.go:463] ha-274516 apiserver status = Running (err=<nil>)
	I0120 11:39:30.754124  963316 status.go:176] ha-274516 status: &{Name:ha-274516 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 11:39:30.754144  963316 status.go:174] checking status of ha-274516-m02 ...
	I0120 11:39:30.754421  963316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:39:30.754456  963316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:39:30.770442  963316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39511
	I0120 11:39:30.770896  963316 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:39:30.771476  963316 main.go:141] libmachine: Using API Version  1
	I0120 11:39:30.771500  963316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:39:30.771816  963316 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:39:30.772113  963316 main.go:141] libmachine: (ha-274516-m02) Calling .GetState
	I0120 11:39:30.773863  963316 status.go:371] ha-274516-m02 host status = "Stopped" (err=<nil>)
	I0120 11:39:30.773876  963316 status.go:384] host is not running, skipping remaining checks
	I0120 11:39:30.773881  963316 status.go:176] ha-274516-m02 status: &{Name:ha-274516-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 11:39:30.773898  963316 status.go:174] checking status of ha-274516-m03 ...
	I0120 11:39:30.774279  963316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:39:30.774338  963316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:39:30.790371  963316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40939
	I0120 11:39:30.790826  963316 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:39:30.791300  963316 main.go:141] libmachine: Using API Version  1
	I0120 11:39:30.791323  963316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:39:30.791647  963316 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:39:30.791815  963316 main.go:141] libmachine: (ha-274516-m03) Calling .GetState
	I0120 11:39:30.793311  963316 status.go:371] ha-274516-m03 host status = "Running" (err=<nil>)
	I0120 11:39:30.793330  963316 host.go:66] Checking if "ha-274516-m03" exists ...
	I0120 11:39:30.793608  963316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:39:30.793652  963316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:39:30.808825  963316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38605
	I0120 11:39:30.809227  963316 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:39:30.809707  963316 main.go:141] libmachine: Using API Version  1
	I0120 11:39:30.809733  963316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:39:30.810078  963316 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:39:30.810270  963316 main.go:141] libmachine: (ha-274516-m03) Calling .GetIP
	I0120 11:39:30.813259  963316 main.go:141] libmachine: (ha-274516-m03) DBG | domain ha-274516-m03 has defined MAC address 52:54:00:7d:48:87 in network mk-ha-274516
	I0120 11:39:30.813756  963316 main.go:141] libmachine: (ha-274516-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:48:87", ip: ""} in network mk-ha-274516: {Iface:virbr1 ExpiryTime:2025-01-20 12:35:35 +0000 UTC Type:0 Mac:52:54:00:7d:48:87 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-274516-m03 Clientid:01:52:54:00:7d:48:87}
	I0120 11:39:30.813784  963316 main.go:141] libmachine: (ha-274516-m03) DBG | domain ha-274516-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:7d:48:87 in network mk-ha-274516
	I0120 11:39:30.814073  963316 host.go:66] Checking if "ha-274516-m03" exists ...
	I0120 11:39:30.814380  963316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:39:30.814423  963316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:39:30.830290  963316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40451
	I0120 11:39:30.830732  963316 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:39:30.831186  963316 main.go:141] libmachine: Using API Version  1
	I0120 11:39:30.831214  963316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:39:30.831617  963316 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:39:30.831870  963316 main.go:141] libmachine: (ha-274516-m03) Calling .DriverName
	I0120 11:39:30.832066  963316 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 11:39:30.832103  963316 main.go:141] libmachine: (ha-274516-m03) Calling .GetSSHHostname
	I0120 11:39:30.835052  963316 main.go:141] libmachine: (ha-274516-m03) DBG | domain ha-274516-m03 has defined MAC address 52:54:00:7d:48:87 in network mk-ha-274516
	I0120 11:39:30.835547  963316 main.go:141] libmachine: (ha-274516-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:48:87", ip: ""} in network mk-ha-274516: {Iface:virbr1 ExpiryTime:2025-01-20 12:35:35 +0000 UTC Type:0 Mac:52:54:00:7d:48:87 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-274516-m03 Clientid:01:52:54:00:7d:48:87}
	I0120 11:39:30.835577  963316 main.go:141] libmachine: (ha-274516-m03) DBG | domain ha-274516-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:7d:48:87 in network mk-ha-274516
	I0120 11:39:30.835762  963316 main.go:141] libmachine: (ha-274516-m03) Calling .GetSSHPort
	I0120 11:39:30.835945  963316 main.go:141] libmachine: (ha-274516-m03) Calling .GetSSHKeyPath
	I0120 11:39:30.836108  963316 main.go:141] libmachine: (ha-274516-m03) Calling .GetSSHUsername
	I0120 11:39:30.836221  963316 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/ha-274516-m03/id_rsa Username:docker}
	I0120 11:39:30.927967  963316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 11:39:30.943567  963316 kubeconfig.go:125] found "ha-274516" server: "https://192.168.39.254:8443"
	I0120 11:39:30.943599  963316 api_server.go:166] Checking apiserver status ...
	I0120 11:39:30.943631  963316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 11:39:30.957543  963316 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1437/cgroup
	W0120 11:39:30.966626  963316 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1437/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0120 11:39:30.966664  963316 ssh_runner.go:195] Run: ls
	I0120 11:39:30.970547  963316 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0120 11:39:30.976473  963316 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0120 11:39:30.976494  963316 status.go:463] ha-274516-m03 apiserver status = Running (err=<nil>)
	I0120 11:39:30.976502  963316 status.go:176] ha-274516-m03 status: &{Name:ha-274516-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 11:39:30.976519  963316 status.go:174] checking status of ha-274516-m04 ...
	I0120 11:39:30.976822  963316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:39:30.976861  963316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:39:30.992742  963316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42189
	I0120 11:39:30.993111  963316 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:39:30.993671  963316 main.go:141] libmachine: Using API Version  1
	I0120 11:39:30.993693  963316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:39:30.993985  963316 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:39:30.994220  963316 main.go:141] libmachine: (ha-274516-m04) Calling .GetState
	I0120 11:39:30.995737  963316 status.go:371] ha-274516-m04 host status = "Running" (err=<nil>)
	I0120 11:39:30.995751  963316 host.go:66] Checking if "ha-274516-m04" exists ...
	I0120 11:39:30.996009  963316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:39:30.996045  963316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:39:31.011549  963316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34317
	I0120 11:39:31.011897  963316 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:39:31.012288  963316 main.go:141] libmachine: Using API Version  1
	I0120 11:39:31.012306  963316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:39:31.012570  963316 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:39:31.012791  963316 main.go:141] libmachine: (ha-274516-m04) Calling .GetIP
	I0120 11:39:31.015626  963316 main.go:141] libmachine: (ha-274516-m04) DBG | domain ha-274516-m04 has defined MAC address 52:54:00:67:36:90 in network mk-ha-274516
	I0120 11:39:31.016039  963316 main.go:141] libmachine: (ha-274516-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:36:90", ip: ""} in network mk-ha-274516: {Iface:virbr1 ExpiryTime:2025-01-20 12:37:03 +0000 UTC Type:0 Mac:52:54:00:67:36:90 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-274516-m04 Clientid:01:52:54:00:67:36:90}
	I0120 11:39:31.016070  963316 main.go:141] libmachine: (ha-274516-m04) DBG | domain ha-274516-m04 has defined IP address 192.168.39.73 and MAC address 52:54:00:67:36:90 in network mk-ha-274516
	I0120 11:39:31.016232  963316 host.go:66] Checking if "ha-274516-m04" exists ...
	I0120 11:39:31.016608  963316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:39:31.016656  963316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:39:31.031577  963316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35335
	I0120 11:39:31.031979  963316 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:39:31.032371  963316 main.go:141] libmachine: Using API Version  1
	I0120 11:39:31.032391  963316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:39:31.032727  963316 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:39:31.032927  963316 main.go:141] libmachine: (ha-274516-m04) Calling .DriverName
	I0120 11:39:31.033105  963316 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 11:39:31.033134  963316 main.go:141] libmachine: (ha-274516-m04) Calling .GetSSHHostname
	I0120 11:39:31.035518  963316 main.go:141] libmachine: (ha-274516-m04) DBG | domain ha-274516-m04 has defined MAC address 52:54:00:67:36:90 in network mk-ha-274516
	I0120 11:39:31.035904  963316 main.go:141] libmachine: (ha-274516-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:36:90", ip: ""} in network mk-ha-274516: {Iface:virbr1 ExpiryTime:2025-01-20 12:37:03 +0000 UTC Type:0 Mac:52:54:00:67:36:90 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-274516-m04 Clientid:01:52:54:00:67:36:90}
	I0120 11:39:31.035931  963316 main.go:141] libmachine: (ha-274516-m04) DBG | domain ha-274516-m04 has defined IP address 192.168.39.73 and MAC address 52:54:00:67:36:90 in network mk-ha-274516
	I0120 11:39:31.036045  963316 main.go:141] libmachine: (ha-274516-m04) Calling .GetSSHPort
	I0120 11:39:31.036249  963316 main.go:141] libmachine: (ha-274516-m04) Calling .GetSSHKeyPath
	I0120 11:39:31.036384  963316 main.go:141] libmachine: (ha-274516-m04) Calling .GetSSHUsername
	I0120 11:39:31.036523  963316 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/ha-274516-m04/id_rsa Username:docker}
	I0120 11:39:31.114616  963316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 11:39:31.132751  963316 status.go:176] ha-274516-m04 status: &{Name:ha-274516-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.42s)
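Note how the status check above returns exit status 7 once m02 is stopped while still printing the full per-node report; the test treats that exit code as expected rather than as an error. The sketch below shows one way to consume such a degraded status from Go; it is illustrative only and assumes the same binary path and profile as the log.

// degraded_status.go: illustrative degraded-cluster status read, not the test's
// helper. `minikube status` deliberately exits non-zero (7 in the log above) when
// any node is stopped, so the exit code alone is not a failure signal.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-274516", "status")
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	if err != nil && !errors.As(err, &exitErr) {
		panic(err) // the binary could not be run at all
	}
	if exitErr != nil {
		// A non-zero code here reflects node state, not a broken invocation.
		fmt.Println("status exit code:", exitErr.ExitCode())
	}
	// The stdout still contains the full per-node report, so inspect it directly.
	stopped := strings.Count(string(out), "host: Stopped")
	fmt.Printf("%d node(s) reported as stopped\n%s", stopped, out)
}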

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.65s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.65s)

TestMultiControlPlane/serial/RestartSecondaryNode (52.66s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 node start m02 -v=7 --alsologtostderr
E0120 11:39:37.402426  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-274516 node start m02 -v=7 --alsologtostderr: (51.759074091s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (52.66s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0120 11:40:25.168775  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/functional-473856/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (463.67s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-274516 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-274516 -v=7 --alsologtostderr
E0120 11:42:41.308627  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/functional-473856/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:43:09.010393  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/functional-473856/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:44:37.399656  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-274516 -v=7 --alsologtostderr: (4m34.200056792s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-274516 --wait=true -v=7 --alsologtostderr
E0120 11:46:00.474766  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:47:41.307876  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/functional-473856/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-274516 --wait=true -v=7 --alsologtostderr: (3m9.35851902s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-274516
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (463.67s)
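RestartClusterKeepsNodes records the node list before the stop and compares it with the list after the full restart to confirm no node was dropped. The sketch below illustrates that comparison; it is not the test's code, the stop/restart step is elided, and the assumption that minikube node list prints one "name ip" line per node is taken from typical output rather than from this log.

// node_list_compare.go: illustrative before/after node-set comparison.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// nodeList returns the node names reported by `minikube node list`.
func nodeList(profile string) ([]string, error) {
	out, err := exec.Command("out/minikube-linux-amd64", "node", "list", "-p", profile).Output()
	if err != nil {
		return nil, err
	}
	var names []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if fields := strings.Fields(line); len(fields) > 0 {
			names = append(names, fields[0])
		}
	}
	return names, nil
}

func main() {
	before, err := nodeList("ha-274516")
	if err != nil {
		panic(err)
	}
	// ... stop and restart the cluster here, as the log does ...
	after, err := nodeList("ha-274516")
	if err != nil {
		panic(err)
	}
	if fmt.Sprint(before) != fmt.Sprint(after) {
		fmt.Printf("node set changed: %v -> %v\n", before, after)
		return
	}
	fmt.Println("restart kept all nodes:", after)
}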

TestMultiControlPlane/serial/DeleteSecondaryNode (18.08s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-274516 node delete m03 -v=7 --alsologtostderr: (17.333398838s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.08s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.61s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.61s)

TestMultiControlPlane/serial/StopCluster (272.91s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 stop -v=7 --alsologtostderr
E0120 11:49:37.399853  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:52:41.307900  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/functional-473856/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-274516 stop -v=7 --alsologtostderr: (4m32.800422738s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-274516 status -v=7 --alsologtostderr: exit status 7 (112.802554ms)

-- stdout --
	ha-274516
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-274516-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-274516-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0120 11:53:00.532662  967663 out.go:345] Setting OutFile to fd 1 ...
	I0120 11:53:00.532800  967663 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:53:00.532810  967663 out.go:358] Setting ErrFile to fd 2...
	I0120 11:53:00.532815  967663 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:53:00.532969  967663 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-942401/.minikube/bin
	I0120 11:53:00.533137  967663 out.go:352] Setting JSON to false
	I0120 11:53:00.533170  967663 mustload.go:65] Loading cluster: ha-274516
	I0120 11:53:00.533195  967663 notify.go:220] Checking for updates...
	I0120 11:53:00.533572  967663 config.go:182] Loaded profile config "ha-274516": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 11:53:00.533598  967663 status.go:174] checking status of ha-274516 ...
	I0120 11:53:00.534011  967663 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:53:00.534047  967663 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:53:00.555486  967663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45911
	I0120 11:53:00.556029  967663 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:53:00.556719  967663 main.go:141] libmachine: Using API Version  1
	I0120 11:53:00.556750  967663 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:53:00.557104  967663 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:53:00.557310  967663 main.go:141] libmachine: (ha-274516) Calling .GetState
	I0120 11:53:00.559079  967663 status.go:371] ha-274516 host status = "Stopped" (err=<nil>)
	I0120 11:53:00.559092  967663 status.go:384] host is not running, skipping remaining checks
	I0120 11:53:00.559098  967663 status.go:176] ha-274516 status: &{Name:ha-274516 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 11:53:00.559135  967663 status.go:174] checking status of ha-274516-m02 ...
	I0120 11:53:00.559429  967663 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:53:00.559467  967663 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:53:00.573806  967663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42759
	I0120 11:53:00.574293  967663 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:53:00.574765  967663 main.go:141] libmachine: Using API Version  1
	I0120 11:53:00.574790  967663 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:53:00.575108  967663 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:53:00.575295  967663 main.go:141] libmachine: (ha-274516-m02) Calling .GetState
	I0120 11:53:00.576683  967663 status.go:371] ha-274516-m02 host status = "Stopped" (err=<nil>)
	I0120 11:53:00.576695  967663 status.go:384] host is not running, skipping remaining checks
	I0120 11:53:00.576700  967663 status.go:176] ha-274516-m02 status: &{Name:ha-274516-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 11:53:00.576714  967663 status.go:174] checking status of ha-274516-m04 ...
	I0120 11:53:00.576973  967663 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 11:53:00.577013  967663 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 11:53:00.591329  967663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45649
	I0120 11:53:00.591676  967663 main.go:141] libmachine: () Calling .GetVersion
	I0120 11:53:00.592057  967663 main.go:141] libmachine: Using API Version  1
	I0120 11:53:00.592079  967663 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 11:53:00.592384  967663 main.go:141] libmachine: () Calling .GetMachineName
	I0120 11:53:00.592563  967663 main.go:141] libmachine: (ha-274516-m04) Calling .GetState
	I0120 11:53:00.593842  967663 status.go:371] ha-274516-m04 host status = "Stopped" (err=<nil>)
	I0120 11:53:00.593857  967663 status.go:384] host is not running, skipping remaining checks
	I0120 11:53:00.593863  967663 status.go:176] ha-274516-m04 status: &{Name:ha-274516-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.91s)

TestMultiControlPlane/serial/RestartCluster (117.61s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-274516 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0120 11:54:04.372854  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/functional-473856/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:54:37.399861  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-274516 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m56.847046959s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (117.61s)
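The final kubectl get nodes -o go-template=... invocation above asserts that every node reports a Ready condition: the template walks .items[].status.conditions and prints the status of each condition whose type is "Ready". The sketch below evaluates the same template body with Go's text/template against a hand-written stand-in for the node list; the JSON literal is illustrative, not captured from the cluster.

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

// tmpl is the same template body the test passes via -o go-template.
const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

// nodeList is an illustrative stand-in for "kubectl get nodes -o json" output.
const nodeList = `{"items":[
  {"status":{"conditions":[{"type":"MemoryPressure","status":"False"},{"type":"Ready","status":"True"}]}},
  {"status":{"conditions":[{"type":"Ready","status":"True"}]}}
]}`

func main() {
	var data map[string]interface{}
	if err := json.Unmarshal([]byte(nodeList), &data); err != nil {
		panic(err)
	}
	t := template.Must(template.New("ready").Parse(tmpl))
	// Prints " True" once per node; the test expects only True values here.
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}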

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (73.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-274516 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-274516 --control-plane -v=7 --alsologtostderr: (1m12.799619263s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-274516 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (73.65s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

                                                
                                    
TestJSONOutput/start/Command (53.44s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-203643 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-203643 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (53.443094006s)
--- PASS: TestJSONOutput/start/Command (53.44s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.63s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-203643 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.63s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.6s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-203643 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.32s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-203643 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-203643 --output=json --user=testUser: (7.320363346s)
--- PASS: TestJSONOutput/stop/Command (7.32s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-109633 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-109633 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (64.855839ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"39da1292-6bde-475a-a6fc-4e659a0b5d92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-109633] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fc3e2a30-0517-4452-ba87-b1a0a8c1898f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20151"}}
	{"specversion":"1.0","id":"f61b9740-dc34-442e-bee9-fd7c0e3e9db7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"42b2766b-2645-46e3-ba69-3bcbf05ad3d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20151-942401/kubeconfig"}}
	{"specversion":"1.0","id":"e653510e-cc6e-46d6-a742-a73d29aba049","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-942401/.minikube"}}
	{"specversion":"1.0","id":"782d3bc9-1208-4b7d-bc21-bc811c4cf7d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"635d4af1-f189-4e56-841f-6a7eb82e6fc9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"77cef5ac-2ad9-4580-b1cb-88a1c375e4c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-109633" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-109633
--- PASS: TestErrorJSONOutput (0.20s)
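Every line in the stdout block above is one CloudEvents-style JSON object emitted by minikube's --output=json mode; this failing run ends with an io.k8s.sigs.minikube.error event carrying exitcode 56 and the DRV_UNSUPPORTED_OS name. The sketch below decodes such line-delimited events using only the field names visible in the log; it is an illustrative consumer, not minikube's own decoder.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// event mirrors the fields visible in the JSON lines above; all data values
// in that output are strings, so a string map is enough for this sketch.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// In practice the input would be the stdout of a minikube --output=json run;
	// here the same line-delimited JSON is read from stdin.
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" {
			continue
		}
		var e event
		if err := json.Unmarshal([]byte(line), &e); err != nil {
			continue // skip anything that is not a JSON event line
		}
		if strings.HasSuffix(e.Type, ".error") {
			fmt.Printf("error %s: %s (exit code %s)\n", e.Data["name"], e.Data["message"], e.Data["exitcode"])
		}
	}
}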

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (82.56s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-392695 --driver=kvm2  --container-runtime=crio
E0120 11:57:41.310691  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/functional-473856/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-392695 --driver=kvm2  --container-runtime=crio: (39.409285265s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-410193 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-410193 --driver=kvm2  --container-runtime=crio: (40.078982915s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-392695
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-410193
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-410193" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-410193
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-410193: (1.019193082s)
helpers_test.go:175: Cleaning up "first-392695" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-392695
--- PASS: TestMinikubeProfile (82.56s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (29.05s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-988381 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-988381 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.046645213s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.05s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-988381 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-988381 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (25.5s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-004625 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-004625 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.497811547s)
--- PASS: TestMountStart/serial/StartWithMountSecond (25.50s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-004625 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-004625 ssh -- mount | grep 9p
E0120 11:59:37.399749  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.89s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-988381 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.89s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-004625 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-004625 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-004625
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-004625: (1.278101465s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (21.75s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-004625
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-004625: (20.753160963s)
--- PASS: TestMountStart/serial/RestartStopped (21.75s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-004625 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-004625 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (109.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-222827 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-222827 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m49.316028263s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (109.73s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-222827 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-222827 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-222827 -- rollout status deployment/busybox: (3.935141s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-222827 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-222827 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-222827 -- exec busybox-58667487b6-fkbtq -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-222827 -- exec busybox-58667487b6-ps8v9 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-222827 -- exec busybox-58667487b6-fkbtq -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-222827 -- exec busybox-58667487b6-ps8v9 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-222827 -- exec busybox-58667487b6-fkbtq -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-222827 -- exec busybox-58667487b6-ps8v9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.36s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-222827 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-222827 -- exec busybox-58667487b6-fkbtq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-222827 -- exec busybox-58667487b6-fkbtq -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-222827 -- exec busybox-58667487b6-ps8v9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-222827 -- exec busybox-58667487b6-ps8v9 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)
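The exec'd pipeline above (nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3) takes the fifth line of nslookup's output and that line's third space-separated field, which is the resolved host address the test then pings. The sketch below reproduces the same extraction in Go; the sample nslookup output is an assumed busybox-style layout, included only to make the example self-contained.

package main

import (
	"fmt"
	"strings"
)

// hostIP mimics `awk 'NR==5' | cut -d' ' -f3`: fifth line, third field,
// splitting on single spaces exactly as cut does.
func hostIP(nslookupOutput string) (string, bool) {
	lines := strings.Split(nslookupOutput, "\n")
	if len(lines) < 5 {
		return "", false
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return "", false
	}
	return fields[2], true
}

func main() {
	// Illustrative output shape only; the real text comes from nslookup inside the pod.
	out := "Server:    10.96.0.10\nAddress:   10.96.0.10:53\n\nName:      host.minikube.internal\nAddress 1: 192.168.39.1 host.minikube.internal\n"
	if ip, ok := hostIP(out); ok {
		fmt.Println(ip) // 192.168.39.1
	}
}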

                                                
                                    
TestMultiNode/serial/AddNode (46.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-222827 -v 3 --alsologtostderr
E0120 12:02:40.477522  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:02:41.308011  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/functional-473856/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-222827 -v 3 --alsologtostderr: (46.031004643s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (46.62s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-222827 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.59s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 cp testdata/cp-test.txt multinode-222827:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 ssh -n multinode-222827 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 cp multinode-222827:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile842699770/001/cp-test_multinode-222827.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 ssh -n multinode-222827 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 cp multinode-222827:/home/docker/cp-test.txt multinode-222827-m02:/home/docker/cp-test_multinode-222827_multinode-222827-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 ssh -n multinode-222827 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 ssh -n multinode-222827-m02 "sudo cat /home/docker/cp-test_multinode-222827_multinode-222827-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 cp multinode-222827:/home/docker/cp-test.txt multinode-222827-m03:/home/docker/cp-test_multinode-222827_multinode-222827-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 ssh -n multinode-222827 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 ssh -n multinode-222827-m03 "sudo cat /home/docker/cp-test_multinode-222827_multinode-222827-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 cp testdata/cp-test.txt multinode-222827-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 ssh -n multinode-222827-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 cp multinode-222827-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile842699770/001/cp-test_multinode-222827-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 ssh -n multinode-222827-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 cp multinode-222827-m02:/home/docker/cp-test.txt multinode-222827:/home/docker/cp-test_multinode-222827-m02_multinode-222827.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 ssh -n multinode-222827-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 ssh -n multinode-222827 "sudo cat /home/docker/cp-test_multinode-222827-m02_multinode-222827.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 cp multinode-222827-m02:/home/docker/cp-test.txt multinode-222827-m03:/home/docker/cp-test_multinode-222827-m02_multinode-222827-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 ssh -n multinode-222827-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 ssh -n multinode-222827-m03 "sudo cat /home/docker/cp-test_multinode-222827-m02_multinode-222827-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 cp testdata/cp-test.txt multinode-222827-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 ssh -n multinode-222827-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 cp multinode-222827-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile842699770/001/cp-test_multinode-222827-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 ssh -n multinode-222827-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 cp multinode-222827-m03:/home/docker/cp-test.txt multinode-222827:/home/docker/cp-test_multinode-222827-m03_multinode-222827.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 ssh -n multinode-222827-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 ssh -n multinode-222827 "sudo cat /home/docker/cp-test_multinode-222827-m03_multinode-222827.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 cp multinode-222827-m03:/home/docker/cp-test.txt multinode-222827-m02:/home/docker/cp-test_multinode-222827-m03_multinode-222827-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 ssh -n multinode-222827-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 ssh -n multinode-222827-m02 "sudo cat /home/docker/cp-test_multinode-222827-m03_multinode-222827-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.28s)

                                                
                                    
TestMultiNode/serial/StopNode (2.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-222827 node stop m03: (1.456558991s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-222827 status: exit status 7 (444.075366ms)

                                                
                                                
-- stdout --
	multinode-222827
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-222827-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-222827-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-222827 status --alsologtostderr: exit status 7 (432.032495ms)

                                                
                                                
-- stdout --
	multinode-222827
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-222827-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-222827-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 12:02:55.398356  975257 out.go:345] Setting OutFile to fd 1 ...
	I0120 12:02:55.398461  975257 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:02:55.398470  975257 out.go:358] Setting ErrFile to fd 2...
	I0120 12:02:55.398474  975257 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:02:55.398703  975257 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-942401/.minikube/bin
	I0120 12:02:55.398880  975257 out.go:352] Setting JSON to false
	I0120 12:02:55.398910  975257 mustload.go:65] Loading cluster: multinode-222827
	I0120 12:02:55.399014  975257 notify.go:220] Checking for updates...
	I0120 12:02:55.399287  975257 config.go:182] Loaded profile config "multinode-222827": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:02:55.399308  975257 status.go:174] checking status of multinode-222827 ...
	I0120 12:02:55.399692  975257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:02:55.399733  975257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:02:55.415738  975257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36713
	I0120 12:02:55.416112  975257 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:02:55.416756  975257 main.go:141] libmachine: Using API Version  1
	I0120 12:02:55.416777  975257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:02:55.417281  975257 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:02:55.417542  975257 main.go:141] libmachine: (multinode-222827) Calling .GetState
	I0120 12:02:55.419208  975257 status.go:371] multinode-222827 host status = "Running" (err=<nil>)
	I0120 12:02:55.419228  975257 host.go:66] Checking if "multinode-222827" exists ...
	I0120 12:02:55.419685  975257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:02:55.419734  975257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:02:55.436187  975257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37275
	I0120 12:02:55.436606  975257 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:02:55.437174  975257 main.go:141] libmachine: Using API Version  1
	I0120 12:02:55.437217  975257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:02:55.437570  975257 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:02:55.437793  975257 main.go:141] libmachine: (multinode-222827) Calling .GetIP
	I0120 12:02:55.440998  975257 main.go:141] libmachine: (multinode-222827) DBG | domain multinode-222827 has defined MAC address 52:54:00:5d:17:0e in network mk-multinode-222827
	I0120 12:02:55.441460  975257 main.go:141] libmachine: (multinode-222827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:17:0e", ip: ""} in network mk-multinode-222827: {Iface:virbr1 ExpiryTime:2025-01-20 13:00:17 +0000 UTC Type:0 Mac:52:54:00:5d:17:0e Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-222827 Clientid:01:52:54:00:5d:17:0e}
	I0120 12:02:55.441477  975257 main.go:141] libmachine: (multinode-222827) DBG | domain multinode-222827 has defined IP address 192.168.39.140 and MAC address 52:54:00:5d:17:0e in network mk-multinode-222827
	I0120 12:02:55.441630  975257 host.go:66] Checking if "multinode-222827" exists ...
	I0120 12:02:55.441910  975257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:02:55.441954  975257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:02:55.456392  975257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37709
	I0120 12:02:55.456797  975257 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:02:55.457278  975257 main.go:141] libmachine: Using API Version  1
	I0120 12:02:55.457319  975257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:02:55.457711  975257 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:02:55.457873  975257 main.go:141] libmachine: (multinode-222827) Calling .DriverName
	I0120 12:02:55.458068  975257 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 12:02:55.458104  975257 main.go:141] libmachine: (multinode-222827) Calling .GetSSHHostname
	I0120 12:02:55.460920  975257 main.go:141] libmachine: (multinode-222827) DBG | domain multinode-222827 has defined MAC address 52:54:00:5d:17:0e in network mk-multinode-222827
	I0120 12:02:55.461281  975257 main.go:141] libmachine: (multinode-222827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:17:0e", ip: ""} in network mk-multinode-222827: {Iface:virbr1 ExpiryTime:2025-01-20 13:00:17 +0000 UTC Type:0 Mac:52:54:00:5d:17:0e Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-222827 Clientid:01:52:54:00:5d:17:0e}
	I0120 12:02:55.461310  975257 main.go:141] libmachine: (multinode-222827) DBG | domain multinode-222827 has defined IP address 192.168.39.140 and MAC address 52:54:00:5d:17:0e in network mk-multinode-222827
	I0120 12:02:55.461807  975257 main.go:141] libmachine: (multinode-222827) Calling .GetSSHPort
	I0120 12:02:55.461989  975257 main.go:141] libmachine: (multinode-222827) Calling .GetSSHKeyPath
	I0120 12:02:55.462161  975257 main.go:141] libmachine: (multinode-222827) Calling .GetSSHUsername
	I0120 12:02:55.462306  975257 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/multinode-222827/id_rsa Username:docker}
	I0120 12:02:55.549192  975257 ssh_runner.go:195] Run: systemctl --version
	I0120 12:02:55.555980  975257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 12:02:55.569505  975257 kubeconfig.go:125] found "multinode-222827" server: "https://192.168.39.140:8443"
	I0120 12:02:55.569535  975257 api_server.go:166] Checking apiserver status ...
	I0120 12:02:55.569565  975257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:02:55.583929  975257 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1087/cgroup
	W0120 12:02:55.592881  975257 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1087/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0120 12:02:55.592932  975257 ssh_runner.go:195] Run: ls
	I0120 12:02:55.597021  975257 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0120 12:02:55.601478  975257 api_server.go:279] https://192.168.39.140:8443/healthz returned 200:
	ok
	I0120 12:02:55.601503  975257 status.go:463] multinode-222827 apiserver status = Running (err=<nil>)
	I0120 12:02:55.601516  975257 status.go:176] multinode-222827 status: &{Name:multinode-222827 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 12:02:55.601544  975257 status.go:174] checking status of multinode-222827-m02 ...
	I0120 12:02:55.601867  975257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:02:55.601920  975257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:02:55.617326  975257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37393
	I0120 12:02:55.617743  975257 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:02:55.618139  975257 main.go:141] libmachine: Using API Version  1
	I0120 12:02:55.618158  975257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:02:55.618459  975257 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:02:55.618688  975257 main.go:141] libmachine: (multinode-222827-m02) Calling .GetState
	I0120 12:02:55.620174  975257 status.go:371] multinode-222827-m02 host status = "Running" (err=<nil>)
	I0120 12:02:55.620195  975257 host.go:66] Checking if "multinode-222827-m02" exists ...
	I0120 12:02:55.620589  975257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:02:55.620644  975257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:02:55.636327  975257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36563
	I0120 12:02:55.636765  975257 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:02:55.637204  975257 main.go:141] libmachine: Using API Version  1
	I0120 12:02:55.637224  975257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:02:55.637533  975257 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:02:55.637735  975257 main.go:141] libmachine: (multinode-222827-m02) Calling .GetIP
	I0120 12:02:55.640208  975257 main.go:141] libmachine: (multinode-222827-m02) DBG | domain multinode-222827-m02 has defined MAC address 52:54:00:ff:02:05 in network mk-multinode-222827
	I0120 12:02:55.640624  975257 main.go:141] libmachine: (multinode-222827-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:02:05", ip: ""} in network mk-multinode-222827: {Iface:virbr1 ExpiryTime:2025-01-20 13:01:13 +0000 UTC Type:0 Mac:52:54:00:ff:02:05 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:multinode-222827-m02 Clientid:01:52:54:00:ff:02:05}
	I0120 12:02:55.640652  975257 main.go:141] libmachine: (multinode-222827-m02) DBG | domain multinode-222827-m02 has defined IP address 192.168.39.110 and MAC address 52:54:00:ff:02:05 in network mk-multinode-222827
	I0120 12:02:55.640790  975257 host.go:66] Checking if "multinode-222827-m02" exists ...
	I0120 12:02:55.641200  975257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:02:55.641247  975257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:02:55.655813  975257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40403
	I0120 12:02:55.656176  975257 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:02:55.656616  975257 main.go:141] libmachine: Using API Version  1
	I0120 12:02:55.656639  975257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:02:55.656928  975257 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:02:55.657125  975257 main.go:141] libmachine: (multinode-222827-m02) Calling .DriverName
	I0120 12:02:55.657289  975257 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 12:02:55.657315  975257 main.go:141] libmachine: (multinode-222827-m02) Calling .GetSSHHostname
	I0120 12:02:55.660045  975257 main.go:141] libmachine: (multinode-222827-m02) DBG | domain multinode-222827-m02 has defined MAC address 52:54:00:ff:02:05 in network mk-multinode-222827
	I0120 12:02:55.660414  975257 main.go:141] libmachine: (multinode-222827-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:02:05", ip: ""} in network mk-multinode-222827: {Iface:virbr1 ExpiryTime:2025-01-20 13:01:13 +0000 UTC Type:0 Mac:52:54:00:ff:02:05 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:multinode-222827-m02 Clientid:01:52:54:00:ff:02:05}
	I0120 12:02:55.660446  975257 main.go:141] libmachine: (multinode-222827-m02) DBG | domain multinode-222827-m02 has defined IP address 192.168.39.110 and MAC address 52:54:00:ff:02:05 in network mk-multinode-222827
	I0120 12:02:55.660552  975257 main.go:141] libmachine: (multinode-222827-m02) Calling .GetSSHPort
	I0120 12:02:55.660717  975257 main.go:141] libmachine: (multinode-222827-m02) Calling .GetSSHKeyPath
	I0120 12:02:55.660861  975257 main.go:141] libmachine: (multinode-222827-m02) Calling .GetSSHUsername
	I0120 12:02:55.660953  975257 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-942401/.minikube/machines/multinode-222827-m02/id_rsa Username:docker}
	I0120 12:02:55.745968  975257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 12:02:55.759692  975257 status.go:176] multinode-222827-m02 status: &{Name:multinode-222827-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0120 12:02:55.759915  975257 status.go:174] checking status of multinode-222827-m03 ...
	I0120 12:02:55.760326  975257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:02:55.760375  975257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:02:55.776966  975257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38387
	I0120 12:02:55.777373  975257 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:02:55.777828  975257 main.go:141] libmachine: Using API Version  1
	I0120 12:02:55.777854  975257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:02:55.778169  975257 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:02:55.778407  975257 main.go:141] libmachine: (multinode-222827-m03) Calling .GetState
	I0120 12:02:55.780081  975257 status.go:371] multinode-222827-m03 host status = "Stopped" (err=<nil>)
	I0120 12:02:55.780094  975257 status.go:384] host is not running, skipping remaining checks
	I0120 12:02:55.780101  975257 status.go:176] multinode-222827-m03 status: &{Name:multinode-222827-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.33s)
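When a host is running, the status check in the stderr block above falls back from the failed freezer-cgroup lookup to probing the apiserver's /healthz endpoint (the log shows https://192.168.39.140:8443/healthz returned 200). The sketch below performs a similar probe; the URL comes from the log, and skipping certificate verification here is purely for illustration, not a description of how minikube itself talks to the apiserver.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The kube-apiserver serves /healthz over TLS with a cluster-internal CA;
	// this illustrative probe skips verification, whereas a real check would
	// trust the cluster CA bundle instead.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.140:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz returned", resp.StatusCode) // 200 means the apiserver is up
}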

                                                
                                    
TestMultiNode/serial/StartAfterStop (38.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-222827 node start m03 -v=7 --alsologtostderr: (38.19074166s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.81s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (324.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-222827
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-222827
E0120 12:04:37.404525  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-222827: (3m3.227103984s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-222827 --wait=true -v=8 --alsologtostderr
E0120 12:07:41.309003  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/functional-473856/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-222827 --wait=true -v=8 --alsologtostderr: (2m20.869160732s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-222827
--- PASS: TestMultiNode/serial/RestartKeepsNodes (324.20s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-222827 node delete m03: (1.960490603s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.54s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (182.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 stop
E0120 12:09:37.402750  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:10:44.376977  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/functional-473856/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-222827 stop: (3m1.876657618s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-222827 status: exit status 7 (94.812483ms)

                                                
                                                
-- stdout --
	multinode-222827
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-222827-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-222827 status --alsologtostderr: exit status 7 (83.36745ms)

                                                
                                                
-- stdout --
	multinode-222827
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-222827-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 12:12:03.348960  978217 out.go:345] Setting OutFile to fd 1 ...
	I0120 12:12:03.349247  978217 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:12:03.349258  978217 out.go:358] Setting ErrFile to fd 2...
	I0120 12:12:03.349263  978217 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:12:03.349475  978217 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-942401/.minikube/bin
	I0120 12:12:03.349683  978217 out.go:352] Setting JSON to false
	I0120 12:12:03.349719  978217 mustload.go:65] Loading cluster: multinode-222827
	I0120 12:12:03.349829  978217 notify.go:220] Checking for updates...
	I0120 12:12:03.350215  978217 config.go:182] Loaded profile config "multinode-222827": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:12:03.350243  978217 status.go:174] checking status of multinode-222827 ...
	I0120 12:12:03.350799  978217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:12:03.350840  978217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:12:03.365459  978217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45149
	I0120 12:12:03.365828  978217 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:12:03.366413  978217 main.go:141] libmachine: Using API Version  1
	I0120 12:12:03.366442  978217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:12:03.366768  978217 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:12:03.366984  978217 main.go:141] libmachine: (multinode-222827) Calling .GetState
	I0120 12:12:03.368428  978217 status.go:371] multinode-222827 host status = "Stopped" (err=<nil>)
	I0120 12:12:03.368441  978217 status.go:384] host is not running, skipping remaining checks
	I0120 12:12:03.368446  978217 status.go:176] multinode-222827 status: &{Name:multinode-222827 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 12:12:03.368462  978217 status.go:174] checking status of multinode-222827-m02 ...
	I0120 12:12:03.368717  978217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:12:03.368752  978217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:12:03.382986  978217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43875
	I0120 12:12:03.383391  978217 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:12:03.383796  978217 main.go:141] libmachine: Using API Version  1
	I0120 12:12:03.383815  978217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:12:03.384091  978217 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:12:03.384264  978217 main.go:141] libmachine: (multinode-222827-m02) Calling .GetState
	I0120 12:12:03.385560  978217 status.go:371] multinode-222827-m02 host status = "Stopped" (err=<nil>)
	I0120 12:12:03.385575  978217 status.go:384] host is not running, skipping remaining checks
	I0120 12:12:03.385582  978217 status.go:176] multinode-222827-m02 status: &{Name:multinode-222827-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (182.06s)
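The exit status 7 seen above is the expected shape of `status` against a fully stopped multinode cluster (both hosts report Stopped). For scripting around this state, the same information is available in machine-readable form; a minimal sketch reusing the profile name and binary path from this run:

# status still exits non-zero while the hosts are stopped, so read the output before the exit code
$ out/minikube-linux-amd64 -p multinode-222827 status -o json || true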

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (98.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-222827 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0120 12:12:41.308146  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/functional-473856/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-222827 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m38.101251628s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-222827 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (98.62s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (40.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-222827
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-222827-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-222827-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (63.381231ms)

                                                
                                                
-- stdout --
	* [multinode-222827-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20151
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20151-942401/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-942401/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-222827-m02' is duplicated with machine name 'multinode-222827-m02' in profile 'multinode-222827'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-222827-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-222827-m03 --driver=kvm2  --container-runtime=crio: (39.022242295s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-222827
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-222827: exit status 80 (210.008375ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-222827 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-222827-m03 already exists in multinode-222827-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-222827-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (40.33s)
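For reference, the name-conflict behaviour validated above can be reproduced by hand with the same commands the test drives. A minimal sketch using the profile names and binary path from this run; any other unique profile name behaves like -m03 here:

$ out/minikube-linux-amd64 node list -p multinode-222827
# rejected with MK_USAGE (exit 14): the profile name collides with an existing machine name
$ out/minikube-linux-amd64 start -p multinode-222827-m02 --driver=kvm2 --container-runtime=crio
# accepted: the name is unique across profiles and machines
$ out/minikube-linux-amd64 start -p multinode-222827-m03 --driver=kvm2 --container-runtime=crio
$ out/minikube-linux-amd64 delete -p multinode-222827-m03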

                                                
                                    
x
+
TestScheduledStopUnix (110.55s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-293264 --memory=2048 --driver=kvm2  --container-runtime=crio
E0120 12:17:41.307919  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/functional-473856/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-293264 --memory=2048 --driver=kvm2  --container-runtime=crio: (38.906171428s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-293264 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-293264 -n scheduled-stop-293264
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-293264 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0120 12:17:56.598228  949656 retry.go:31] will retry after 126.659µs: open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/scheduled-stop-293264/pid: no such file or directory
I0120 12:17:56.599383  949656 retry.go:31] will retry after 174.904µs: open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/scheduled-stop-293264/pid: no such file or directory
I0120 12:17:56.600536  949656 retry.go:31] will retry after 175.755µs: open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/scheduled-stop-293264/pid: no such file or directory
I0120 12:17:56.601656  949656 retry.go:31] will retry after 372.112µs: open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/scheduled-stop-293264/pid: no such file or directory
I0120 12:17:56.602779  949656 retry.go:31] will retry after 297.61µs: open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/scheduled-stop-293264/pid: no such file or directory
I0120 12:17:56.603895  949656 retry.go:31] will retry after 609.482µs: open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/scheduled-stop-293264/pid: no such file or directory
I0120 12:17:56.605015  949656 retry.go:31] will retry after 1.358422ms: open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/scheduled-stop-293264/pid: no such file or directory
I0120 12:17:56.607238  949656 retry.go:31] will retry after 1.184536ms: open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/scheduled-stop-293264/pid: no such file or directory
I0120 12:17:56.609445  949656 retry.go:31] will retry after 3.794645ms: open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/scheduled-stop-293264/pid: no such file or directory
I0120 12:17:56.613643  949656 retry.go:31] will retry after 4.215854ms: open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/scheduled-stop-293264/pid: no such file or directory
I0120 12:17:56.618865  949656 retry.go:31] will retry after 5.918777ms: open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/scheduled-stop-293264/pid: no such file or directory
I0120 12:17:56.625113  949656 retry.go:31] will retry after 7.8147ms: open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/scheduled-stop-293264/pid: no such file or directory
I0120 12:17:56.633342  949656 retry.go:31] will retry after 14.404993ms: open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/scheduled-stop-293264/pid: no such file or directory
I0120 12:17:56.648555  949656 retry.go:31] will retry after 22.028966ms: open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/scheduled-stop-293264/pid: no such file or directory
I0120 12:17:56.670711  949656 retry.go:31] will retry after 32.34275ms: open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/scheduled-stop-293264/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-293264 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-293264 -n scheduled-stop-293264
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-293264
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-293264 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-293264
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-293264: exit status 7 (73.815925ms)

                                                
                                                
-- stdout --
	scheduled-stop-293264
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-293264 -n scheduled-stop-293264
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-293264 -n scheduled-stop-293264: exit status 7 (68.701065ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-293264" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-293264
--- PASS: TestScheduledStopUnix (110.55s)
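The scheduled-stop flow exercised above boils down to a handful of CLI calls; a minimal sketch with the profile name and binary path from this run (the durations are illustrative):

$ out/minikube-linux-amd64 stop -p scheduled-stop-293264 --schedule 5m
$ out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-293264 -n scheduled-stop-293264
$ out/minikube-linux-amd64 stop -p scheduled-stop-293264 --cancel-scheduled
# re-arm with a short deadline; once it fires, status exits 7 and reports Stopped
$ out/minikube-linux-amd64 stop -p scheduled-stop-293264 --schedule 15s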

                                                
                                    
x
+
TestRunningBinaryUpgrade (224.94s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2643278783 start -p running-upgrade-438919 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0120 12:19:20.479555  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:19:37.399545  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/addons-158281/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2643278783 start -p running-upgrade-438919 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m1.675798296s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-438919 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-438919 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m39.331754408s)
helpers_test.go:175: Cleaning up "running-upgrade-438919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-438919
--- PASS: TestRunningBinaryUpgrade (224.94s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-378897 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-378897 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (90.707269ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-378897] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20151
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20151-942401/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-942401/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
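As the stderr above notes, --no-kubernetes and --kubernetes-version are mutually exclusive (MK_USAGE, exit 14). Clearing any globally pinned version first lets the Kubernetes-free start go through; a minimal sketch with the profile name from this run:

$ minikube config unset kubernetes-version
$ out/minikube-linux-amd64 start -p NoKubernetes-378897 --no-kubernetes --driver=kvm2 --container-runtime=crio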

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (94.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-378897 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-378897 --driver=kvm2  --container-runtime=crio: (1m34.397257053s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-378897 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (94.66s)

                                                
                                    
x
+
TestPause/serial/Start (101.18s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-298045 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-298045 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m41.183770554s)
--- PASS: TestPause/serial/Start (101.18s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (66.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-378897 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-378897 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m5.481706022s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-378897 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-378897 status -o json: exit status 2 (276.831294ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-378897","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-378897
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-378897: (1.019345903s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (66.78s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (34.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-378897 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-378897 --no-kubernetes --driver=kvm2  --container-runtime=crio: (34.237490358s)
--- PASS: TestNoKubernetes/serial/Start (34.24s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-378897 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-378897 "sudo systemctl is-active --quiet service kubelet": exit status 1 (208.187432ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (17.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (16.246611686s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
E0120 12:22:41.307999  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/functional-473856/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.26944112s)
--- PASS: TestNoKubernetes/serial/ProfileList (17.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (2.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-816069 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-816069 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (100.545551ms)

                                                
                                                
-- stdout --
	* [false-816069] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20151
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20151-942401/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-942401/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 12:22:29.314985  985773 out.go:345] Setting OutFile to fd 1 ...
	I0120 12:22:29.315076  985773 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:22:29.315084  985773 out.go:358] Setting ErrFile to fd 2...
	I0120 12:22:29.315089  985773 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:22:29.315269  985773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-942401/.minikube/bin
	I0120 12:22:29.315865  985773 out.go:352] Setting JSON to false
	I0120 12:22:29.316937  985773 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":18292,"bootTime":1737357457,"procs":244,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 12:22:29.317051  985773 start.go:139] virtualization: kvm guest
	I0120 12:22:29.319193  985773 out.go:177] * [false-816069] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 12:22:29.320455  985773 out.go:177]   - MINIKUBE_LOCATION=20151
	I0120 12:22:29.320471  985773 notify.go:220] Checking for updates...
	I0120 12:22:29.322548  985773 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 12:22:29.323602  985773 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20151-942401/kubeconfig
	I0120 12:22:29.324592  985773 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-942401/.minikube
	I0120 12:22:29.325517  985773 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 12:22:29.326442  985773 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 12:22:29.327968  985773 config.go:182] Loaded profile config "NoKubernetes-378897": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0120 12:22:29.328078  985773 config.go:182] Loaded profile config "kubernetes-upgrade-049625": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0120 12:22:29.328233  985773 config.go:182] Loaded profile config "running-upgrade-438919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0120 12:22:29.328352  985773 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 12:22:29.363893  985773 out.go:177] * Using the kvm2 driver based on user configuration
	I0120 12:22:29.365021  985773 start.go:297] selected driver: kvm2
	I0120 12:22:29.365037  985773 start.go:901] validating driver "kvm2" against <nil>
	I0120 12:22:29.365049  985773 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 12:22:29.366717  985773 out.go:201] 
	W0120 12:22:29.367804  985773 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0120 12:22:29.368818  985773 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-816069 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-816069

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-816069

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-816069

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-816069

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-816069

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-816069

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-816069

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-816069

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-816069

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-816069

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816069"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816069"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816069"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-816069

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816069"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816069"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-816069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-816069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-816069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-816069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-816069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-816069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-816069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-816069" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816069"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816069"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816069"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816069"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816069"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-816069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-816069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-816069" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816069"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816069"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816069"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816069"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816069"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 12:22:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.39.240:8443
  name: running-upgrade-438919
contexts:
- context:
    cluster: running-upgrade-438919
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 12:22:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: running-upgrade-438919
  name: running-upgrade-438919
current-context: ""
kind: Config
preferences: {}
users:
- name: running-upgrade-438919
  user:
    client-certificate: /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/running-upgrade-438919/client.crt
    client-key: /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/running-upgrade-438919/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-816069

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816069"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816069"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816069"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816069"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816069"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816069"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816069"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816069"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816069"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816069"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816069"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816069"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816069"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816069"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816069"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816069"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816069"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816069"

                                                
                                                
----------------------- debugLogs end: false-816069 [took: 2.625622808s] --------------------------------
helpers_test.go:175: Cleaning up "false-816069" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-816069
--- PASS: TestNetworkPlugins/group/false (2.86s)
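The failure above is the expected guard: with --container-runtime=crio, minikube refuses --cni=false because CRI-O needs a CNI plugin. A minimal sketch of a start line the guard accepts; --cni=bridge is an assumed example value here, not one used in this run (any supported CNI selection, or simply omitting --cni, avoids the MK_USAGE exit):

# rejected in this run: crio requires CNI
$ out/minikube-linux-amd64 start -p false-816069 --memory=2048 --cni=false --driver=kvm2 --container-runtime=crio
# assumed alternative: pick a CNI (or drop the flag) so the crio guard passes
$ out/minikube-linux-amd64 start -p false-816069 --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=crio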

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.27s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.27s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (132.97s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3602598824 start -p stopped-upgrade-038534 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3602598824 start -p stopped-upgrade-038534 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (46.903018841s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3602598824 -p stopped-upgrade-038534 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3602598824 -p stopped-upgrade-038534 stop: (1.479415093s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-038534 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-038534 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m24.588447989s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (132.97s)
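The stopped-binary upgrade path above is three steps: start the profile with the old release binary (the /tmp path appears to be the v1.26.0 release fetched during the Setup step), stop it, then start the same profile with the binary under test. A minimal sketch using the exact paths from this run:

$ /tmp/minikube-v1.26.0.3602598824 start -p stopped-upgrade-038534 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
$ /tmp/minikube-v1.26.0.3602598824 -p stopped-upgrade-038534 stop
$ out/minikube-linux-amd64 start -p stopped-upgrade-038534 --memory=2200 --driver=kvm2 --container-runtime=crio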

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-378897
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-378897: (1.297588106s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (38.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-378897 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-378897 --driver=kvm2  --container-runtime=crio: (38.243018579s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (38.24s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-378897 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-378897 "sudo systemctl is-active --quiet service kubelet": exit status 1 (204.334659ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.78s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-038534
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.78s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (77.86s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-496524 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-496524 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m17.859838239s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (77.86s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (11.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-496524 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [efb478fd-5792-4b2f-9e0b-bd7d4037ba73] Pending
helpers_test.go:344: "busybox" [efb478fd-5792-4b2f-9e0b-bd7d4037ba73] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [efb478fd-5792-4b2f-9e0b-bd7d4037ba73] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.003716327s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-496524 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.31s)
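The DeployApp step above is a plain kubectl sequence against the profile's context. A minimal sketch with the context and manifest from this run; the `kubectl wait` line is an assumed stand-in for the test's own label polling:

$ kubectl --context no-preload-496524 create -f testdata/busybox.yaml
# assumed equivalent of the test's "integration-test=busybox healthy" poll
$ kubectl --context no-preload-496524 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m
$ kubectl --context no-preload-496524 exec busybox -- /bin/sh -c "ulimit -n"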

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (53.85s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-987349 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-987349 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (53.853510637s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (53.85s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-496524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-496524 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (91.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-496524 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-496524 --alsologtostderr -v=3: (1m31.132664753s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.13s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-981597 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-981597 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (55.214404058s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.21s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (11.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-987349 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1b0fe687-b1ff-49dc-966c-e32df3907231] Pending
helpers_test.go:344: "busybox" [1b0fe687-b1ff-49dc-966c-e32df3907231] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1b0fe687-b1ff-49dc-966c-e32df3907231] Running
E0120 12:27:41.308106  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/functional-473856/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.003853183s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-987349 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.9s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-987349 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-987349 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.90s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (90.9s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-987349 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-987349 --alsologtostderr -v=3: (1m30.903518476s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (90.90s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-496524 -n no-preload-496524
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-496524 -n no-preload-496524: exit status 7 (66.935562ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-496524 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-981597 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8ed35d0c-d510-4963-888d-6ab352a4da84] Pending
helpers_test.go:344: "busybox" [8ed35d0c-d510-4963-888d-6ab352a4da84] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8ed35d0c-d510-4963-888d-6ab352a4da84] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004271111s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-981597 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-981597 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-981597 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-981597 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-981597 --alsologtostderr -v=3: (1m31.04564407s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.05s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-987349 -n embed-certs-987349
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-987349 -n embed-certs-987349: exit status 7 (67.980403ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-987349 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-981597 -n default-k8s-diff-port-981597
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-981597 -n default-k8s-diff-port-981597: exit status 7 (78.253399ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-981597 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (3.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-134433 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-134433 --alsologtostderr -v=3: (3.293689371s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-134433 -n old-k8s-version-134433
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-134433 -n old-k8s-version-134433: exit status 7 (72.666293ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-134433 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (47.58s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-476001 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-476001 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (47.58127883s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.58s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (65.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-816069 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-816069 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m5.693039998s)
--- PASS: TestNetworkPlugins/group/auto/Start (65.69s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-476001 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-476001 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.144371676s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (11.73s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-476001 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-476001 --alsologtostderr -v=3: (11.725811001s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.73s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.91s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-476001 -n newest-cni-476001
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-476001 -n newest-cni-476001: exit status 7 (77.185475ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-476001 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.91s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (36.84s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-476001 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-476001 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (36.392503657s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-476001 -n newest-cni-476001
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.84s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-816069 "pgrep -a kubelet"
I0120 12:56:23.464397  949656 config.go:182] Loaded profile config "auto-816069": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (13.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-816069 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-z2kz7" [b68b1adc-db61-42f2-b6e5-f17d780e5aae] Pending
helpers_test.go:344: "netcat-5d86dc444-z2kz7" [b68b1adc-db61-42f2-b6e5-f17d780e5aae] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-z2kz7" [b68b1adc-db61-42f2-b6e5-f17d780e5aae] Running
E0120 12:56:32.389668  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/no-preload-496524/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:56:32.396086  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/no-preload-496524/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:56:32.407453  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/no-preload-496524/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:56:32.428871  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/no-preload-496524/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:56:32.470330  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/no-preload-496524/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:56:32.551839  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/no-preload-496524/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:56:32.713495  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/no-preload-496524/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:56:33.035788  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/no-preload-496524/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:56:33.677453  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/no-preload-496524/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:56:34.959229  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/no-preload-496524/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.004471948s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.31s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-816069 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-816069 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-816069 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-476001 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.96s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-476001 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-476001 -n newest-cni-476001
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-476001 -n newest-cni-476001: exit status 2 (254.981783ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-476001 -n newest-cni-476001
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-476001 -n newest-cni-476001: exit status 2 (257.032564ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-476001 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-476001 -n newest-cni-476001
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-476001 -n newest-cni-476001
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.96s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (62.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-816069 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-816069 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m2.842231869s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (62.84s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (116.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-816069 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-816069 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m56.052296821s)
--- PASS: TestNetworkPlugins/group/calico/Start (116.05s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (124.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-816069 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0120 12:57:13.366564  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/no-preload-496524/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-816069 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (2m4.733728844s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (124.73s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (89.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-816069 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0120 12:57:41.308116  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/functional-473856/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-816069 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m29.105584717s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (89.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-k7dps" [19bc5eff-3f06-4a89-9ddf-f8c8228d2383] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004283294s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-816069 "pgrep -a kubelet"
I0120 12:57:52.218250  949656 config.go:182] Loaded profile config "kindnet-816069": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-816069 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-jxgc4" [996afa52-c99e-46c3-8666-4db9580af67e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0120 12:57:54.328388  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/no-preload-496524/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-jxgc4" [996afa52-c99e-46c3-8666-4db9580af67e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004536832s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-816069 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-816069 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-816069 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (88.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-816069 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0120 12:58:25.886702  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/default-k8s-diff-port-981597/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:58:25.893664  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/default-k8s-diff-port-981597/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:58:25.905567  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/default-k8s-diff-port-981597/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:58:25.927582  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/default-k8s-diff-port-981597/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:58:25.969339  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/default-k8s-diff-port-981597/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:58:26.051595  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/default-k8s-diff-port-981597/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:58:26.213340  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/default-k8s-diff-port-981597/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:58:26.535582  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/default-k8s-diff-port-981597/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:58:27.178137  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/default-k8s-diff-port-981597/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:58:28.460401  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/default-k8s-diff-port-981597/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:58:31.021982  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/default-k8s-diff-port-981597/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:58:36.144202  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/default-k8s-diff-port-981597/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-816069 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m28.387734853s)
--- PASS: TestNetworkPlugins/group/flannel/Start (88.39s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-vjznr" [af30f708-dca4-4cdb-9c1c-3b24fbd1291e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005276027s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-816069 "pgrep -a kubelet"
E0120 12:58:46.386337  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/default-k8s-diff-port-981597/client.crt: no such file or directory" logger="UnhandledError"
I0120 12:58:46.419663  949656 config.go:182] Loaded profile config "calico-816069": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-816069 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-2npjv" [be9274d9-417e-42d0-b520-462ce853353f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-2npjv" [be9274d9-417e-42d0-b520-462ce853353f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004414461s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-816069 "pgrep -a kubelet"
I0120 12:58:58.083903  949656 config.go:182] Loaded profile config "custom-flannel-816069": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-816069 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-4p4kd" [76136ffb-809d-4478-85ca-6736718b25ca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-4p4kd" [76136ffb-809d-4478-85ca-6736718b25ca] Running
E0120 12:59:06.868388  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/default-k8s-diff-port-981597/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.00508848s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.24s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-816069 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-816069 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-816069 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-816069 "pgrep -a kubelet"
I0120 12:59:03.027786  949656 config.go:182] Loaded profile config "enable-default-cni-816069": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-816069 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-dfj9c" [6b43ccbd-c269-431e-8b6d-68c9f4788b29] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-dfj9c" [6b43ccbd-c269-431e-8b6d-68c9f4788b29] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.003735625s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-816069 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-816069 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-816069 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-816069 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-816069 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-816069 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (60.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-816069 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-816069 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m0.842435104s)
--- PASS: TestNetworkPlugins/group/bridge/Start (60.84s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-2qbbn" [a7c1a062-8490-41a4-81d5-8bba4e8f891b] Running
E0120 12:59:50.135751  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003857096s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-816069 "pgrep -a kubelet"
I0120 12:59:55.079505  949656 config.go:182] Loaded profile config "flannel-816069": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-816069 replace --force -f testdata/netcat-deployment.yaml
E0120 12:59:55.257881  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-dtdwj" [47b04ffa-0f9a-4d06-8c90-8515b435b26c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-dtdwj" [47b04ffa-0f9a-4d06-8c90-8515b435b26c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.005023255s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-816069 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-816069 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-816069 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-816069 "pgrep -a kubelet"
I0120 13:00:17.885036  949656 config.go:182] Loaded profile config "bridge-816069": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-816069 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-q665c" [2c65fd7c-fadb-482a-8021-215726e804cb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-q665c" [2c65fd7c-fadb-482a-8021-215726e804cb] Running
E0120 13:00:25.981756  949656 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/old-k8s-version-134433/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004098146s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-816069 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-816069 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-816069 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                    

Test skip (39/308)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.0/cached-images 0
15 TestDownloadOnly/v1.32.0/binaries 0
16 TestDownloadOnly/v1.32.0/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.31
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
135 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
136 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
137 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
138 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
139 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
141 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
142 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestGvisorAddon 0
178 TestImageBuild 0
205 TestKicCustomNetwork 0
206 TestKicExistingNetwork 0
207 TestKicCustomSubnet 0
208 TestKicStaticIP 0
240 TestChangeNoneUser 0
243 TestScheduledStopWindows 0
245 TestSkaffold 0
247 TestInsufficientStorage 0
251 TestMissingContainerUpgrade 0
260 TestStartStop/group/disable-driver-mounts 0.16
271 TestNetworkPlugins/group/kubenet 3.18
279 TestNetworkPlugins/group/cilium 3.13
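
Every entry below comes from a conditional skip in the corresponding test source; the reason string passed to t.Skip or t.Skipf is what is printed next to the SKIP line. A minimal, hypothetical sketch of that pattern (illustrative only, not code from the minikube repository):

package example

import (
	"runtime"
	"testing"
)

// TestDockerOnlyFeature shows the skip pattern used throughout this report: the test
// inspects its environment and bails out with a reason before running the body.
func TestDockerOnlyFeature(t *testing.T) {
	const containerRuntime = "crio" // in the real suite this comes from the test flags
	if containerRuntime != "docker" {
		t.Skipf("skipping: only runs with docker container runtime, currently testing %s", containerRuntime)
	}
	if runtime.GOOS != "linux" {
		t.Skip("linux-only test")
	}
	// ... actual test body would go here
}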
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.31s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-158281 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.31s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-969801" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-969801
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-816069 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-816069

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-816069

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-816069

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-816069

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-816069

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-816069

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-816069

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-816069

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-816069

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-816069

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816069"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816069"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816069"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-816069

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816069"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816069"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-816069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-816069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-816069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-816069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-816069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-816069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-816069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-816069" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816069"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816069"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816069"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816069"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816069"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-816069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-816069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-816069" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816069"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816069"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816069"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816069"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816069"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 12:22:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.39.240:8443
  name: running-upgrade-438919
contexts:
- context:
    cluster: running-upgrade-438919
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 12:22:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: running-upgrade-438919
  name: running-upgrade-438919
current-context: ""
kind: Config
preferences: {}
users:
- name: running-upgrade-438919
  user:
    client-certificate: /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/running-upgrade-438919/client.crt
    client-key: /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/running-upgrade-438919/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-816069

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816069"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816069"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816069"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816069"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816069"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816069"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816069"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816069"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816069"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816069"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816069"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816069"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816069"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816069"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816069"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816069"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816069"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816069"

                                                
                                                
----------------------- debugLogs end: kubenet-816069 [took: 3.043918692s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-816069" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-816069
--- SKIP: TestNetworkPlugins/group/kubenet (3.18s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-816069 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-816069

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-816069

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-816069

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-816069

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-816069

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-816069

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-816069

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-816069

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-816069

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-816069

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816069"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816069"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816069"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-816069

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816069"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816069"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-816069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-816069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-816069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-816069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-816069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-816069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-816069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-816069" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816069"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816069"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816069"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816069"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816069"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-816069

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-816069

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-816069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-816069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-816069

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-816069

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-816069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-816069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-816069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-816069" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-816069" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816069"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816069"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816069"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816069"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816069"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20151-942401/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 12:22:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.39.240:8443
  name: running-upgrade-438919
contexts:
- context:
    cluster: running-upgrade-438919
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 12:22:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: running-upgrade-438919
  name: running-upgrade-438919
current-context: ""
kind: Config
preferences: {}
users:
- name: running-upgrade-438919
  user:
    client-certificate: /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/running-upgrade-438919/client.crt
    client-key: /home/jenkins/minikube-integration/20151-942401/.minikube/profiles/running-upgrade-438919/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-816069

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816069"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816069"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816069"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816069"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816069"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816069"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816069"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816069"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816069"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816069"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816069"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816069"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816069"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816069"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816069"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816069"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816069"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-816069" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816069"

                                                
                                                
----------------------- debugLogs end: cilium-816069 [took: 2.997095463s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-816069" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-816069
--- SKIP: TestNetworkPlugins/group/cilium (3.13s)

                                                
                                    